In this paper we develop and analyse domain decomposition methods for linear systems of equations arising from conforming finite element discretisations of positive Maxwell-type equations, namely $\mathbf{H}(\mathbf{curl})$ problems. It is well known that the convergence of domain decomposition methods relies heavily on the efficiency of the coarse space used in the second level. We design adaptive coarse spaces that complement a near-kernel space made from the gradients of scalar functions. The new class of preconditioners is inspired by the idea of subspace decomposition, but is based on spectral coarse spaces, and is specially designed for curl-conforming discretisations of Maxwell's equations in heterogeneous media on general domains which may have holes. Our approach has wider applicability and theoretical justification than the well-known Hiptmair-Xu auxiliary space preconditioner, with results extending to the variable coefficient case and non-convex domains at the expense of a larger coarse space.
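For orientation (the notation below is generic, not the precise preconditioner analysed in the paper), a two-level preconditioner built by subspace decomposition from local solves and a coarse space takes the additive Schwarz form
\[
M^{-1} \;=\; R_0^{T} A_0^{-1} R_0 \;+\; \sum_{j=1}^{J} R_j^{T} A_j^{-1} R_j,
\qquad A_0 = R_0 A R_0^{T}, \quad A_j = R_j A R_j^{T},
\]
where $R_j$ restricts to the $j$-th subdomain and $R_0$ to the coarse space; in the adaptive setting described above, the coarse space combines gradients of scalar functions (the near-kernel space) with spectrally selected local modes.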
This study proposes a method for aggregating/synthesizing global and local sub-models for fast and flexible spatial regression modeling. Eigenvector spatial filtering (ESF) is used to model spatially varying coefficients and spatial dependence in the residuals in each sub-model, while the generalized product-of-experts method is used to aggregate these sub-models. The major advantages of the proposed method are as follows: (i) it is highly scalable for large samples in terms of both accuracy and computational efficiency; (ii) it is easily implemented by first estimating the sub-models independently and then aggregating/averaging them; and (iii) likelihood-based inference is available because the marginal likelihood is available in closed form. The accuracy and computational efficiency of the proposed method are confirmed using Monte Carlo simulation experiments. The method is then applied to a residential land price analysis in Japan. The results demonstrate the usefulness of the method for improving the interpretability of spatially varying coefficients. The proposed method is implemented in the R package spmoran (version 0.3.0 or later).
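As a minimal illustration of the aggregation step (a sketch only; the function name, weights, and toy data below are hypothetical, and the actual implementation is the one in spmoran), the generalized product-of-experts combines Gaussian sub-model predictions by precision weighting:

import numpy as np

def gpoe_aggregate(means, variances, weights):
    """Generalized product-of-experts aggregation of Gaussian experts.
    means, variances, weights: arrays of shape (n_experts, n_points);
    weights are non-negative and typically sum to one over experts."""
    precisions = weights / variances                      # weighted precision of each expert
    agg_variance = 1.0 / precisions.sum(axis=0)           # aggregated predictive variance
    agg_mean = agg_variance * (precisions * means).sum(axis=0)
    return agg_mean, agg_variance

# toy usage: three sub-models predicting at five locations
rng = np.random.default_rng(0)
means = rng.normal(size=(3, 5))
variances = np.full((3, 5), 0.5)
weights = np.full((3, 5), 1.0 / 3.0)
mu, var = gpoe_aggregate(means, variances, weights)

Because each expert is estimated independently and only its predictive mean and variance enter the aggregation, the sub-models can be fitted in parallel before this combination step.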
This work proposes a novel variational approximation of partial differential equations on moving geometries determined by explicit boundary representations. The main benefit of the proposed formulation is the ability to handle large displacements of explicitly represented domain boundaries without generating body-fitted meshes or resorting to remeshing techniques. For the space discretization, we use a background mesh and an unfitted method that relies on integration over cut cells only. These intersections are computed using clipping algorithms. To deal with the mesh movement, we pull back the equations to a reference configuration (the spatial mesh at the initial time slab times the time interval) that is constant in time. This way, the geometrical intersection algorithm is required only in 3D, another key property of the proposed scheme. At the end of each time slab, we compute the deformed mesh, intersect the deformed boundary with the background mesh, and use an exact transfer operator between meshes to compute the jump terms in the discontinuous Galerkin time integration. The transfer is also computed using geometrical intersection algorithms. We demonstrate the applicability of the method to fluid problems around rotating (2D and 3D) geometries described by oriented boundary meshes. We also provide a set of numerical experiments that show the optimal convergence of the method.
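In generic terms (the notation here is not taken from the paper), pulling back an integral over the deformed domain $\Omega(t) = \boldsymbol{\phi}_t(\hat\Omega)$ to the fixed reference configuration $\hat\Omega$ uses the standard change of variables
\[
\int_{\Omega(t)} f(\boldsymbol{x})\,\mathrm{d}\boldsymbol{x}
\;=\; \int_{\hat\Omega} f\bigl(\boldsymbol{\phi}_t(\hat{\boldsymbol{x}})\bigr)\,
\bigl|\det \nabla \boldsymbol{\phi}_t(\hat{\boldsymbol{x}})\bigr|\,\mathrm{d}\hat{\boldsymbol{x}},
\]
so that the spatial cut-cell integrals within a time slab can be evaluated on the fixed cut geometry of the reference mesh, avoiding space-time (4D) intersections.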
We design an algorithm for computing the $L$-series associated to an Anderson $t$-motive, exhibiting quasilinear complexity with respect to the target precision. Based on experiments, we conjecture that the order of vanishing at $T=1$ of the $v$-adic $L$-series of a given Anderson $t$-motive with good reduction does not depend on the finite place $v$.
This paper introduces HALLaR, a new first-order method for solving large-scale semidefinite programs (SDPs) with bounded domain. HALLaR is an inexact augmented Lagrangian (AL) method where the AL subproblems are solved by a novel hybrid low-rank (HLR) method. The recipe behind HLR is based on two key ingredients: 1) an adaptive inexact proximal point method with inner acceleration; 2) Frank-Wolfe steps to escape from spurious local stationary points. In contrast to the low-rank method of Burer and Monteiro, HALLaR finds a near-optimal solution (with provable complexity bounds) of SDP instances satisfying strong duality. Computational results comparing HALLaR to state-of-the-art solvers on several large SDP instances arising from maximum stable set, phase retrieval, and matrix completion show that the former finds more accurate solutions in substantially less CPU time than the latter. For example, in less than 20 minutes, HALLaR can solve a maximum stable set SDP instance with dimension pair $(n,m)\approx (10^6,10^7)$ to within $10^{-5}$ relative precision.
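To make the two ingredients concrete (an illustrative sketch only, not the HALLaR implementation; the function names, the trace bound tau, and the data layout are assumptions), the gradient of a standard augmented Lagrangian in the primal variable and the Frank-Wolfe linear minimization oracle over a trace-bounded positive semidefinite set can be written as:

import numpy as np

def fw_step_psd_ball(G, tau):
    """Frank-Wolfe linear minimization oracle over {X PSD, trace(X) <= tau}:
    argmin_X <G, X>. Returns the rank-one vertex tau * v v^T built from the
    eigenvector of the smallest eigenvalue of G when that eigenvalue is
    negative, and the zero matrix otherwise."""
    eigval, eigvec = np.linalg.eigh(G)          # eigenvalues in ascending order
    if eigval[0] < 0:
        v = eigvec[:, [0]]
        return tau * (v @ v.T)
    return np.zeros_like(G)

def augmented_lagrangian_grad(X, C, A, b, lam, beta):
    """Gradient in X of <C, X> + <lam, A(X) - b> + (beta/2)||A(X) - b||^2,
    where A(X)_i = <A_i, X> for symmetric matrices A_i."""
    residual = np.array([np.tensordot(Ai, X) for Ai in A]) - b
    return C + sum((lam[i] + beta * residual[i]) * A[i] for i in range(len(A)))

The vertices of this feasible set are rank-one matrices, which is consistent with a method that keeps its iterates low rank while using occasional Frank-Wolfe steps to escape spurious stationary points.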
The paper presents a spectral representation for general two-sided discrete-time signals from $\ell_\infty$, i.e., for all bounded discrete-time signals, including signals that do not vanish at $\pm\infty$. This representation allows the notions of transfer functions, spectrum gaps, and filters to be extended to such general signals from $\ell_\infty$, and yields frequency-domain conditions for predictability and data recoverability.
A crucial challenge for solving problems in conflict research lies in leveraging the semi-supervised nature of the data that arise. Observed response data such as counts of battle deaths over time indicate latent processes of interest such as the intensity and duration of conflicts, but defining and labeling instances of these unobserved processes requires nuance and is inherently imprecise. The availability of such labels, however, would make it possible to study the effect of intervention-related predictors - such as ceasefires - directly on conflict dynamics (e.g., latent intensity) rather than through an intermediate proxy like observed counts of battle deaths. Motivated by this problem and the new availability of the ETH-PRIO Civil Conflict Ceasefires data set, we propose a Bayesian autoregressive (AR) hidden Markov model (HMM) framework as a sufficiently flexible machine learning approach for semi-supervised regime labeling with uncertainty quantification. We motivate our approach by illustrating how it can be used to study the role that ceasefires play in shaping conflict dynamics. This ceasefires data set is the first systematic and globally comprehensive data on ceasefires, and our work is the first to analyze these new data and to explore the effect of ceasefires on conflict dynamics in a comprehensive and cross-country manner.
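As a schematic of the model class (not the paper's specification or priors; the two regimes and all parameter values below are illustrative), an AR(1) hidden Markov model couples a latent Markov regime with regime-specific autoregressive dynamics for the observed series:

import numpy as np

def simulate_ar_hmm(T, trans, phi, mu, sigma, seed=0):
    """Simulate an AR(1)-HMM: the latent regime z_t follows a Markov chain
    with transition matrix `trans`, and the observation follows
    y_t = mu[z_t] + phi[z_t] * y_{t-1} + Normal(0, sigma[z_t])."""
    rng = np.random.default_rng(seed)
    K = trans.shape[0]
    z = np.zeros(T, dtype=int)
    y = np.zeros(T)
    for t in range(1, T):
        z[t] = rng.choice(K, p=trans[z[t - 1]])
        y[t] = mu[z[t]] + phi[z[t]] * y[t - 1] + rng.normal(0.0, sigma[z[t]])
    return z, y

# illustrative two-regime example: a persistent "low intensity" and "high intensity" state
trans = np.array([[0.95, 0.05], [0.10, 0.90]])
z, y = simulate_ar_hmm(500, trans, phi=[0.3, 0.8], mu=[0.0, 2.0], sigma=[0.5, 1.5])

In the semi-supervised setting, regime labels are observed for only part of the series, and Bayesian inference over the latent states provides the uncertainty quantification for the unlabeled portion.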
This paper introduces a new series of methods that combine modal decomposition algorithms, such as singular value decomposition and high-order singular value decomposition, with deep learning architectures to repair, enhance, and increase the quality and precision of numerical and experimental data. Two- and three-dimensional, numerical and experimental datasets are used to demonstrate the reconstruction capacity of the presented methods, showing that they can be applied to a wide variety of datasets and deliver outstanding results on highly complex, noisy data. Combining the benefits of these techniques yields a series of data-driven methods capable of repairing and/or enhancing the resolution of incomplete or under-resolved datasets by identifying the underlying physics that define the data while filtering out any existing noise. These methods and the Python codes are included in the first release of ModelFLOWs-app.
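For illustration only (this is not the ModelFLOWs-app code; the function name and the toy data are assumptions), the SVD building block underlying such methods can be sketched as a rank truncation that filters noise from a snapshot matrix:

import numpy as np

def truncated_svd_reconstruction(snapshots, rank):
    """Reconstruct a snapshot matrix (space x time) from its leading
    `rank` SVD modes, discarding small-amplitude (noisy) modes."""
    U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# toy usage: low-rank data corrupted with noise
rng = np.random.default_rng(1)
clean = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))
noisy = clean + 0.1 * rng.normal(size=clean.shape)
repaired = truncated_svd_reconstruction(noisy, rank=3)

The methods described above pair this kind of modal truncation with deep learning architectures to go beyond plain rank reduction, e.g. to fill in missing entries or increase resolution.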
The large-sample behavior of non-degenerate multivariate $U$-statistics of arbitrary degree is investigated under the assumption that their kernel depends on parameters that can be estimated consistently. Mild regularity conditions are given which guarantee that, once properly normalized, such statistics are asymptotically multivariate Gaussian both under the null hypothesis and under sequences of local alternatives. The work of Randles (1982, Ann. Statist.) is extended in three ways: the data and the kernel values can be multivariate rather than univariate, the limiting behavior under local alternatives is studied for the first time, and the effect of knowing some of the nuisance parameters is quantified. These results can be applied to a broad range of goodness-of-fit testing contexts, as shown in one specific example.
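In generic notation (not necessarily that of the paper), with a kernel $h$ of degree $m$ and a consistently estimated nuisance parameter $\hat{\gamma}_n$, the statistics considered are of the form
\[
U_n(\hat{\gamma}_n) \;=\; \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h\bigl(X_{i_1},\ldots,X_{i_m};\hat{\gamma}_n\bigr),
\]
and the regularity conditions ensure that $\sqrt{n}\,\{U_n(\hat{\gamma}_n) - \theta\}$ has a multivariate Gaussian limit, with a covariance that accounts for whether the nuisance parameters are estimated or known.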
In this paper we provide a new linear sampling method, based on the same data but a different definition of the data operator, for two inverse problems: the multi-frequency inverse source problem for a fixed observation direction and the Born inverse scattering problem. We show that the associated regularized linear sampling indicator converges to the average of the unknown in a small neighborhood as the regularization parameter tends to zero. We develop both a shape identification theory and a parameter identification theory, which are motivated, analyzed, and implemented with the help of the prolate spheroidal wave functions and their generalizations. We further propose a prolate-based implementation of the linear sampling method and provide numerical experiments to demonstrate how this linear sampling method is capable of reconstructing both the shape and the parameter.
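For orientation (generic notation, not the paper's specific data operator), linear sampling indicators are typically built from a Tikhonov-regularized solution of the data equation $F g_z = \phi_z$ at each sampling point $z$,
\[
g_z^{\alpha} \;=\; \bigl(\alpha I + F^{*}F\bigr)^{-1} F^{*}\phi_z,
\]
and the result stated above describes the limiting behavior of the associated regularized indicator as the regularization parameter $\alpha$ tends to zero.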
In this paper, we consider a numerical method for the multi-term Caputo-Fabrizio time-fractional diffusion equations (with orders $\alpha_i\in(0,1)$, $i=1,2,\ldots,n$). The proposed method employs a fast finite difference scheme to approximate the multi-term fractional derivatives in time, requiring only $O(1)$ storage and $O(N_T)$ computational complexity, where $N_T$ denotes the total number of time steps. A Legendre spectral collocation method is then used for the spatial discretization. The stability and convergence of the scheme are rigorously established. We demonstrate that the proposed scheme is unconditionally stable and convergent with order $O(\left(\Delta t\right)^{2}+N^{-m})$, where $\Delta t$, $N$, and $m$ denote the time step size, the polynomial degree, and the regularity of the exact solution in the spatial variable, respectively. Numerical results are presented to validate the theoretical predictions.
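Schematically (notation generic rather than the paper's exact scheme), the $O(1)$ storage comes from the exponential kernel of the Caputo-Fabrizio derivative: with $\lambda_i = \alpha_i/(1-\alpha_i)$ and a normalization $M(\alpha_i)$,
\[
{}^{\mathrm{CF}}D_t^{\alpha_i} u(t_n) \;=\; \frac{M(\alpha_i)}{1-\alpha_i}\int_0^{t_n} u'(s)\, e^{-\lambda_i (t_n - s)}\, \mathrm{d}s,
\qquad
H_i^{\,n} \;=\; e^{-\lambda_i \Delta t}\, H_i^{\,n-1} \;+\; \int_{t_{n-1}}^{t_n} u'(s)\, e^{-\lambda_i (t_n - s)}\, \mathrm{d}s,
\]
so the history term $H_i^{\,n}$ for each order is updated from its previous value and the increment on the current step alone, which yields the $O(1)$ storage and $O(N_T)$ total work noted above.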