This study constructs an efficient and highly accurate numerical method, based on wavelets and the $L2$-$1_\sigma$ scheme, for a class of parabolic integro-fractional differential equations. Specifically, the Haar wavelet decomposition is used for grid adaptation and efficient computation, while the high-order $L2$-$1_\sigma$ scheme discretizes the time-fractional operator. For the one-dimensional problem, the spatial derivatives are approximated by second-order discretizations, and the integral operator is discretized by a repeated quadrature rule based on the trapezoidal approximation. For the two-dimensional model, by contrast, a semi-discretization is first carried out using the $L2$-$1_\sigma$ scheme for the fractional operator and the composite trapezoidal rule for the integral part; the spatial derivatives are then approximated using two-dimensional Haar wavelets. We investigate the behavior of the proposed higher-order methods theoretically and verify it numerically; in particular, stability and convergence analyses are conducted. The obtained results are compared with those of existing techniques through several graphs and tables, showing that the proposed higher-order methods are more accurate and produce smaller errors than the $L1$ scheme for fractional-order integro-partial differential equations.
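To illustrate the quadrature component, the sketch below applies a composite trapezoidal rule to a Volterra-type memory integral using stored history values on a uniform grid; the kernel and history are hypothetical placeholders, and the $L2$-$1_\sigma$ time-fractional discretization itself is not reproduced here.

```python
import numpy as np

def trapezoid_memory_integral(kernel, u_hist, t_grid, n):
    """Composite trapezoidal approximation of the memory term
        I_n ~ int_0^{t_n} K(t_n, s) u(s) ds
    using the stored history u_hist[0..n] on the grid t_grid.
    The kernel and the history below are illustrative, not the paper's model."""
    if n == 0:
        return 0.0
    s = t_grid[: n + 1]
    vals = kernel(t_grid[n], s) * u_hist[: n + 1]
    ds = np.diff(s)
    return float(np.sum(ds * (vals[:-1] + vals[1:]) / 2.0))

# tiny usage example with a smooth (hypothetical) kernel and history
t = np.linspace(0.0, 1.0, 11)
u = np.exp(-t)                         # hypothetical solution history
K = lambda tn, s: np.exp(-(tn - s))    # hypothetical memory kernel
print(trapezoid_memory_integral(K, u, t, 10))
```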
This paper addresses structured normwise, mixed, and componentwise condition numbers (CNs) for a linear function of the solution to the generalized saddle point problem (GSPP). We present a general framework that enables us to measure the structured CNs of the individual solution components and derive their explicit formulae when the input matrices have symmetric, Toeplitz, or some general linear structures. In addition, compact formulae for the unstructured CNs are obtained, which recover previous results on CNs for GSPPs for specific choices of the linear function. Furthermore, the derived structured CNs are applied to determine the structured CNs for the weighted Toeplitz regularized least-squares problems and Tikhonov regularization problems, thereby recovering some previous results in the literature.
This work focuses on numerical approximations of neutral stochastic delay differential equations whose drift and diffusion coefficients grow super-linearly in both the delay and the state variables. Under generalized monotonicity conditions, we prove that the backward Euler method not only converges strongly in the mean-square sense with order $1/2$, but also inherits the mean-square exponential stability of the original equations. As a byproduct, we obtain the same convergence rate and exponential stability of the backward Euler method for stochastic delay differential equations under generalized monotonicity conditions. These theoretical results are finally supported by several numerical experiments.
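For orientation, here is a minimal sketch of a drift-implicit (backward Euler) step for a scalar neutral SDDE of the form $\mathrm{d}[X(t) - D(X(t-\tau))] = f(X(t), X(t-\tau))\,\mathrm{d}t + g(X(t), X(t-\tau))\,\mathrm{d}W(t)$; the coefficients, the initial history, and the use of a generic nonlinear solver for the implicit step are illustrative assumptions, not the paper's test problem.

```python
import numpy as np
from scipy.optimize import fsolve

# hypothetical coefficients with a superlinear drift
D = lambda y: 0.1 * y
f = lambda x, y: -x - x**3 + 0.5 * y
g = lambda x, y: 0.2 * x + 0.1 * np.sin(y)

tau, T, h = 1.0, 5.0, 0.01
m, N = int(round(tau / h)), int(round(T / h))
rng = np.random.default_rng(0)

vals = np.empty(m + N + 1)      # X on the grid -tau, ..., 0, ..., T
vals[: m + 1] = 1.0             # hypothetical constant initial history on [-tau, 0]

for n in range(N):
    Xn, Xd_n, Xd_n1 = vals[m + n], vals[n], vals[n + 1]   # X(t_n), X(t_n - tau), X(t_{n+1} - tau)
    dW = rng.normal(0.0, np.sqrt(h))
    rhs = Xn - D(Xd_n) + D(Xd_n1) + g(Xn, Xd_n) * dW
    # drift-implicit step: solve X_{n+1} - h f(X_{n+1}, X(t_{n+1} - tau)) = rhs
    vals[m + n + 1] = fsolve(lambda x: x - h * f(x, Xd_n1) - rhs, Xn)[0]
```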
Karppa & Kaski (2019) proposed a novel ``broken'' or ``opportunistic'' matrix multiplication algorithm, based on a variant of Strassen's algorithm, and used this to develop new algorithms for Boolean matrix multiplication, among other tasks. Their algorithm can compute Boolean matrix multiplication in $O(n^{2.778})$ time. While asymptotically faster matrix multiplication algorithms exist, most such algorithms are infeasible for practical problems. We describe an alternative way to use the broken multiplication algorithm to approximately compute matrix multiplication, either for real-valued or Boolean matrices. In brief, instead of running multiple iterations of the broken algorithm on the original input matrix, we form a new larger matrix by sampling and run a single iteration of the broken algorithm on it. Asymptotically, our algorithm has runtime $O(n^{2.763})$, a slight improvement over the Karppa-Kaski algorithm. Since the goal is to obtain new practical matrix-multiplication algorithms, we also estimate the concrete runtime for our algorithm for some large-scale sample problems. It appears that for these parameters, further optimizations are still needed to make our algorithm competitive.
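To fix notation, the sketch below shows one level of the standard Strassen recursion on which the broken variant is built; the opportunistic modification (dropping some of the block additions) and the sampling construction described above are not reproduced here.

```python
import numpy as np

def strassen_one_level(A, B):
    """One level of the classical Strassen recursion (7 block products
    instead of 8) for square matrices with even dimensions."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
print(np.allclose(strassen_one_level(A, B), A @ B))   # True
```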
We give a complete complexity classification for the problem of finding a solution to a given system of equations over a fixed finite monoid, given that a solution over a more restricted monoid exists. As a corollary, we obtain a complexity classification for the same problem over groups.
We present a study on asymptotically compatible Galerkin discretizations for a class of parametrized nonlinear variational problems. The abstract analytical framework is based on variational convergence, or Gamma-convergence. We demonstrate the broad applicability of the theoretical framework by developing asymptotically compatible finite element discretizations of some representative nonlinear nonlocal variational problems on a bounded domain. These include nonlocal nonlinear problems with classically defined, local boundary constraints through heterogeneous localization at the boundary, as well as nonlocal problems posed on parameter-dependent domains.
Continuous-time algebraic Riccati equations can be found in many disciplines in different forms. In the case of small-scale dense coefficient matrices, stabilizing solutions can be computed for all possible formulations of the Riccati equation. This is not the case when it comes to large-scale sparse coefficient matrices. In this paper, we provide a reformulation of the Newton-Kleinman iteration scheme for continuous-time algebraic Riccati equations using indefinite symmetric low-rank factorizations. This allows the method to be applied to general large-scale sparse coefficient matrices. We provide convergence results for several prominent realizations of the equation and demonstrate the effectiveness of the approach in numerical examples.
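For reference, a dense sketch of the classical Newton-Kleinman iteration (each step solves a Lyapunov equation for the current closed-loop matrix) is given below; it does not use the indefinite symmetric low-rank factorizations introduced in the paper, and the test matrices are arbitrary stable examples.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

def newton_kleinman(A, B, Q, X0, tol=1e-12, maxit=50):
    """Classical dense Newton-Kleinman iteration for
        A^T X + X A - X B B^T X + Q = 0,
    starting from a stabilizing initial guess X0."""
    X = X0
    for _ in range(maxit):
        K = B.T @ X                        # current feedback gain
        Acl = A - B @ K                    # closed-loop matrix
        # Lyapunov step: Acl^T X_new + X_new Acl = -(Q + K^T K)
        X_new = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ K))
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X_new):
            return X_new
        X = X_new
    return X

rng = np.random.default_rng(1)
n, m = 6, 2
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stable, so X0 = 0 is stabilizing
B = rng.standard_normal((n, m))
Q = np.eye(n)
X = newton_kleinman(A, B, Q, np.zeros((n, n)))
print(np.linalg.norm(X - solve_continuous_are(A, B, Q, np.eye(m))))   # ~ 0
```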
We introduce a novel continual learning method based on multifidelity deep neural networks. The method learns the correlation between the output of previously trained models and the desired output of the model on the current training dataset, thereby limiting catastrophic forgetting. On its own, the multifidelity continual learning method shows robust results that limit forgetting across several datasets. Additionally, we show that the multifidelity method can be combined with existing continual learning methods, including replay and memory aware synapses, to further limit catastrophic forgetting. The proposed method is especially suited to physical problems where the data satisfy the same physical laws on each domain, or to physics-informed neural networks, because in these cases we expect a strong correlation between the output of the previous model and the desired output of the model on the current training domain.
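As a rough illustration of the idea, the PyTorch sketch below freezes a previously trained network and trains a small correction network that maps the input together with the previous model's prediction to the current task's targets; the architectures and the synthetic data are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# "Low-fidelity" model from the previous task (in practice already trained;
# here it is left at its random initialization purely for illustration) and frozen.
prev_model = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
for p in prev_model.parameters():
    p.requires_grad_(False)

# Correction network for the current task: it sees the input and the
# previous model's output, and learns the correlation with the new targets.
corr = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def predict(x):
    y_prev = prev_model(x)                         # previous task's prediction
    return corr(torch.cat([x, y_prev], dim=1))     # learned correlation/correction

x = torch.linspace(-1.0, 1.0, 128).unsqueeze(1)    # hypothetical current-task data
y = torch.sin(3 * x)
opt = torch.optim.Adam(corr.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(predict(x), y)
    loss.backward()
    opt.step()
```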
We address the problem of constructing approximations based on orthogonal polynomials that preserve an arbitrary set of moments of a given function without losing the spectral convergence property. To this end, we compute the constrained polynomial of best approximation for a generic basis of orthogonal polynomials. The construction is entirely general and allows us to derive structure-preserving numerical methods for partial differential equations that require the conservation of some moments of the solution, typically representing relevant physical quantities of the problem. These properties are essential for capturing the long-time behavior of the solution with high accuracy. We illustrate the generality and performance of the present approach with several numerical applications to Fokker-Planck equations.
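A minimal sketch of the underlying idea: project a function onto an orthonormal (here Legendre) basis and then correct the coefficients by the smallest possible amount so that a prescribed set of moments is reproduced exactly. The target function, the preserved moments, and the quadrature below are illustrative choices, not the paper's general construction.

```python
import numpy as np
from numpy.polynomial import legendre as L

N, kept = 12, [0, 1, 2]                        # polynomial degree, preserved moments
xq, wq = L.leggauss(64)                        # Gauss-Legendre nodes/weights on [-1, 1]
f = np.exp(-xq**2) * (1.0 + xq)                # hypothetical target function

# orthonormal Legendre basis evaluated at the quadrature nodes
norms = np.sqrt(2.0 / (2 * np.arange(N + 1) + 1))
Phi = L.legvander(xq, N) / norms               # Phi[:, j] = phi_j(xq)

c_star = Phi.T @ (wq * f)                      # unconstrained L2 projection of f
A = np.stack([(wq * xq**k) @ Phi for k in kept])   # A[k, j] = int x^k phi_j dx
m = np.array([(wq * xq**k) @ f for k in kept])     # moments of f to be preserved

# constrained minimizer of ||c - c_star|| subject to A c = m
c = c_star + A.T @ np.linalg.solve(A @ A.T, m - A @ c_star)
print(A @ c - m)                               # moment residuals (~ 0)
```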
We present a generic framework for gradient reconstruction schemes on unstructured meshes using the notion of a dyadic sum-vector product. The proposed formulation reconstructs centroidal gradients of a scalar from its directional derivatives along specific directions in a suitably defined neighbourhood. We show that existing gradient reconstruction schemes can be encompassed within this framework by a suitable choice of the geometric vectors that define the dyadic sum tensor. The proposed framework also allows us to re-interpret certain hybrid schemes, which might not be derivable through traditional routes. Additionally, a generalization of flexible gradient schemes is proposed that can be employed to enhance the robustness of consistent gradient schemes without compromising on the accuracy of the computed gradients.
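To make the construction concrete, the sketch below implements the weighted least-squares member of this family: it assembles the dyadic sum tensor from unit directions toward neighbouring centroids and inverts it against the directional derivatives of the scalar. The neighbourhood, weights, and test field are illustrative.

```python
import numpy as np

def dyadic_sum_gradient(xc, phi_c, x_nbrs, phi_nbrs, weights=None):
    """Reconstruct the centroidal gradient of a scalar phi from its
    directional derivatives toward neighbouring centroids:
        G = (sum_k w_k d_k d_k^T)^{-1} sum_k w_k (dphi_k / |r_k|) d_k,
    with d_k the unit vector to neighbour k (weighted least-squares instance)."""
    r = x_nbrs - xc                        # displacement vectors
    dist = np.linalg.norm(r, axis=1)
    d = r / dist[:, None]                  # unit directions
    ddphi = (phi_nbrs - phi_c) / dist      # directional derivatives
    w = np.ones(len(dist)) if weights is None else weights
    T = np.einsum('k,ki,kj->ij', w, d, d)  # dyadic sum tensor
    rhs = np.einsum('k,k,ki->i', w, ddphi, d)
    return np.linalg.solve(T, rhs)

# usage on a linear field phi = 2x + 3y (exact gradient is recovered)
xc = np.array([0.0, 0.0])
x_nbrs = np.array([[1.0, 0.2], [-0.3, 1.0], [0.1, -0.8], [-1.0, -0.1]])
phi = lambda p: 2 * p[..., 0] + 3 * p[..., 1]
print(dyadic_sum_gradient(xc, phi(xc), x_nbrs, phi(x_nbrs)))   # ~ [2, 3]
```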
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve over existing information-theoretic bounds, apply to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical scenarios for deep learning.