We study an infinite-dimensional optimization problem that aims to identify the Nemytskii operator in the nonlinear part of a prototypical semilinear elliptic partial differential equation (PDE) which minimizes the distance between the PDE solution and a given desired state. In contrast to previous works, we consider this identification problem in a low-regularity regime in which the function inducing the Nemytskii operator is a priori only known to be an element of $H^1_{loc}(\mathbb{R})$. This makes the studied problem class a suitable point of departure for the rigorous analysis of training problems for learning-informed PDEs in which an unknown superposition operator is approximated by means of a neural network with nonsmooth activation functions (ReLU, leaky ReLU, etc.). We establish that, despite the low regularity of the controls, it is possible to derive a classical stationarity system for local minimizers and to solve the considered problem by means of a gradient projection method. The convergence of the resulting algorithm is proven in the function space setting. It is also shown that the established first-order necessary optimality conditions imply that locally optimal superposition operators share various characteristic properties with commonly used activation functions: they are always sigmoidal, continuously differentiable away from the origin, and typically possess a distinct kink at zero. The paper concludes with numerical experiments which confirm the theoretical findings.
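To fix ideas, the following is a minimal sketch of a generic gradient projection iteration of the kind referred to above; the objective gradient `grad_J`, the projection `project`, the fixed step size, and the toy box-constrained example are placeholders for illustration and do not reflect the function-space algorithm or problem data analysed in the paper.

```python
import numpy as np

def projected_gradient(u0, grad_J, project, step=1e-2, max_iter=200, tol=1e-8):
    """Generic gradient projection iteration: u_{k+1} = P(u_k - step * J'(u_k))."""
    u = project(u0)
    for _ in range(max_iter):
        u_new = project(u - step * grad_J(u))
        if np.linalg.norm(u_new - u) < tol:   # stop when the update stalls
            return u_new
        u = u_new
    return u

# Toy usage: minimize ||u - target||^2 over the box [0, 1]^3.
target = np.array([0.2, 1.5, -0.3])
u_opt = projected_gradient(
    u0=np.zeros(3),
    grad_J=lambda u: 2.0 * (u - target),
    project=lambda u: np.clip(u, 0.0, 1.0),
)
```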
In this paper the authors study a non-linear elliptic-parabolic system, which is motivated by mathematical models for lithium-ion batteries. One state satisfies a parabolic reaction-diffusion equation and the other an elliptic equation. The goal is to determine several scalar parameters in the coupled model in an optimal manner by utilizing a reliable reduced-order approach based on the reduced basis (RB) method. However, the states are coupled through a strongly non-linear function, and this makes the evaluation of online-efficient error estimates difficult. First, the well-posedness of the system is proved. Then Galerkin finite element and RB discretizations are described for the coupled system. To certify the RB scheme, hierarchical a posteriori error estimators are utilized within an adaptive trust-region optimization method. Numerical experiments illustrate good approximation properties and efficiency using only a relatively small number of reduced basis functions.
Using techniques developed recently in the field of compressed sensing, we prove new upper bounds for general (nonlinear) sampling numbers of (quasi-)Banach smoothness spaces in $L^2$. In particular, we show that in relevant cases such as mixed and isotropic weighted Wiener classes or Sobolev spaces with mixed smoothness, sampling numbers in $L^2$ can be upper bounded by best $n$-term trigonometric widths in $L^\infty$. We describe a recovery procedure from $m$ function values based on $\ell^1$-minimization (basis pursuit denoising). This method achieves a significant gain in the rate of convergence compared to recently developed linear recovery methods. In this deterministic worst-case setting we see an additional speed-up of $m^{-1/2}$ (up to log factors) compared to linear methods in the case of weighted Wiener spaces. For their quasi-Banach counterparts, even an arbitrary polynomial speed-up is possible. Surprisingly, our approach allows us to recover mixed smoothness Sobolev functions belonging to $S^r_pW(\mathbb{T}^d)$ on the $d$-torus with a logarithmically better rate of convergence than any linear method can achieve when $1 < p < 2$ and $d$ is large. This effect is not present for isotropic Sobolev spaces.
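For readers unfamiliar with basis pursuit denoising, here is a minimal sketch of one standard solver for its Lagrangian form, iterative soft-thresholding (ISTA); the measurement matrix, regularisation weight, and toy data are assumptions for illustration only and are not the recovery procedure analysed above.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, n_iter=500):
    """Iterative soft-thresholding for the Lagrangian form of basis pursuit
    denoising: min_x 0.5 * ||A x - y||_2^2 + lam * ||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1/L with L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)                  # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

# Toy usage: recover a sparse coefficient vector from m < n noisy measurements.
rng = np.random.default_rng(0)
n, m = 200, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, y, lam=0.02)
```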
The emerging theory of graph limits offers an analytic perspective on graphs, showing that many important concepts and tools in graph theory and its applications can be described more naturally (and sometimes proved more easily) in analytic language. We extend the theory of graph limits to the ordered setting, presenting a limit object for dense vertex-ordered graphs, which we call an orderon. As a special case, this yields limit objects for matrices whose rows and columns are ordered, and for dynamic graphs that expand (via vertex insertions) over time. Along the way, we devise an ordered locality-preserving variant of the cut distance between ordered graphs, showing that two graphs are close with respect to this distance if and only if they are similar in terms of their ordered subgraph frequencies. We show that the space of orderons is compact with respect to this distance notion, which is key to a successful analysis of combinatorial objects through their limits. We derive several applications of the ordered limit theory in extremal combinatorics, sampling, and property testing in ordered graphs. In particular, we prove a new ordered analogue of the well-known result by Alon and Stav [RS\&A'08] on the furthest graph from a hereditary property; this is the first known result of this type in the ordered setting. Unlike in the unordered regime, here the random graph model $G(n, p)$ with an ordering over the vertices is not always asymptotically the furthest from the property for some $p$. However, using our ordered limit theory, we show that random graphs generated by a stochastic block model, where the blocks are consecutive in the vertex ordering, are (approximately) the furthest. Additionally, we describe an alternative analytic proof of the ordered graph removal lemma [Alon et al., FOCS'17].
We study parametric inference for hypo-elliptic Stochastic Differential Equations (SDEs). Existing research focuses on a particular class of hypo-elliptic SDEs, with components split into `rough'/`smooth' and noise from rough components propagating directly onto smooth ones, but some critical model classes arising in applications have yet to be explored. We aim to fill this gap, thus analysing the highly degenerate class of SDEs whose components split into further sub-groups. Such models include e.g.~the notable case of generalised Langevin equations. We propose a tailored time-discretisation scheme and provide asymptotic results supporting our scheme in the context of high-frequency, full observations. The proposed discretisation scheme is applicable in much more general data regimes and is shown, via simulation studies, to overcome biases also in the practical case when only a smooth component is observed. Joint consideration of our study of highly degenerate SDEs and existing research provides a general `recipe' for the development of time-discretisation schemes to be used within statistical methods for general classes of hypo-elliptic SDEs.
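To make the degenerate structure concrete, the sketch below applies a plain Euler-Maruyama discretisation to a toy quasi-Markovian generalised Langevin system in which noise enters only the deepest component, so the other components are 'smooth'; the chosen drift, coefficients, and the Euler scheme itself are illustrative assumptions and not the tailored discretisation proposed in the paper.

```python
import numpy as np

def euler_gle(q0, p0, z0, dV, lam, alpha, sigma, dt, n_steps, rng):
    """Euler-Maruyama discretisation of a toy quasi-Markovian generalised
    Langevin system:
        dq = p dt,  dp = (-V'(q) + lam*z) dt,  dz = (-lam*p - alpha*z) dt + sigma dW.
    Noise enters only through z; q and p are 'smooth' components."""
    q, p, z = q0, p0, z0
    path = np.empty((n_steps + 1, 3))
    path[0] = (q, p, z)
    for k in range(n_steps):
        dW = rng.standard_normal() * np.sqrt(dt)
        q, p, z = (q + p * dt,
                   p + (-dV(q) + lam * z) * dt,
                   z + (-lam * p - alpha * z) * dt + sigma * dW)
        path[k + 1] = (q, p, z)
    return path

# Toy usage with a quadratic potential V(q) = q^2 / 2.
path = euler_gle(1.0, 0.0, 0.0, dV=lambda q: q, lam=1.0, alpha=1.0,
                 sigma=0.5, dt=1e-3, n_steps=10_000,
                 rng=np.random.default_rng(1))
```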
Artificial intelligence (AI) shows great potential to reduce the huge cost of solving partial differential equations (PDEs). However, this potential is not fully realized in practice, as neural networks are defined and trained on fixed domains and boundaries. Herein, we propose the local neural operator (LNO) for solving transient PDEs on varied domains. It comes with a practical strategy, including boundary treatments, that enables one pre-trained LNO to predict solutions on different domains. For demonstration, LNO learns the Navier-Stokes equations from randomly generated data samples, and the pre-trained LNO is then used as an explicit numerical time-marching scheme to solve fluid flows on unseen domains, e.g., the flow in a lid-driven cavity and the flow across a cascade of airfoils. It is about 1000$\times$ faster than the conventional finite element method in calculating the flow across the cascade of airfoils. The solution process with the pre-trained LNO is highly efficient and has significant potential to accelerate numerical calculations in practice.
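The sketch below illustrates the general idea of rolling out a pre-trained one-step operator as an explicit time-marching scheme, re-imposing boundary conditions after every step; the names, the smoothing stencil standing in for a trained operator, and the zero-wall boundary treatment are placeholders and do not represent the LNO architecture or its boundary strategy.

```python
import numpy as np

def march_with_operator(u0, step_operator, apply_boundary, n_steps):
    """Roll out a pre-trained one-step operator u^{n+1} = G(u^n) in time,
    re-imposing boundary conditions after every step (a stand-in for a
    boundary treatment)."""
    u = apply_boundary(u0)
    history = [u]
    for _ in range(n_steps):
        u = apply_boundary(step_operator(u))
        history.append(u)
    return np.stack(history)

# Toy usage: a smoothing stencil plays the role of a trained operator, and the
# boundary treatment pins the field to zero at the two walls.
def toy_step(u):
    return 0.25 * (np.roll(u, 1) + 2 * u + np.roll(u, -1))

def zero_walls(u):
    u = u.copy()
    u[0] = u[-1] = 0.0
    return u

trajectory = march_with_operator(np.sin(np.linspace(0, np.pi, 64)),
                                 toy_step, zero_walls, n_steps=50)
```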
A non-intrusive proper generalized decomposition (PGD) strategy, coupled with an overlapping domain decomposition (DD) method, is proposed to efficiently construct surrogate models of parametric linear elliptic problems. A parametric multi-domain formulation is presented, with local subproblems featuring arbitrary Dirichlet interface conditions represented through the traces of the finite element functions used for spatial discretization at the subdomain level, with no need for additional auxiliary basis functions. The linearity of the operator is exploited to devise low-dimensional problems with only a few active boundary parameters. An overlapping Schwarz method is used to glue the local surrogate models, solving a linear system for the nodal values of the parametric solution at the interfaces, without introducing Lagrange multipliers to enforce continuity in the overlapping region. The proposed DD-PGD methodology relies on a fully algebraic formulation allowing for real-time computation based on the efficient interpolation of the local surrogate models in the parametric space, with no additional problems to be solved during the execution of the Schwarz algorithm. Numerical results for parametric diffusion and convection-diffusion problems are presented to showcase the accuracy of the DD-PGD approach, its robustness in different regimes, and its superior performance with respect to standard high-fidelity DD methods.
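The following is a minimal sketch of an alternating (multiplicative) overlapping Schwarz iteration on a one-dimensional Poisson problem, in which the local finite-difference solves stand in for evaluations of local surrogate models at the current interface values; the setup, subdomain split, and solver are illustrative assumptions, not the DD-PGD algorithm itself.

```python
import numpy as np

def local_solve(f, left_bc, right_bc, h):
    """Finite-difference solve of -u'' = f on a subdomain with Dirichlet data.
    In a surrogate-based setting this solve would be replaced by evaluating a
    local parametric surrogate at the current interface values."""
    n = len(f)
    A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    rhs = f.copy()
    rhs[0] += left_bc / h**2
    rhs[-1] += right_bc / h**2
    return np.linalg.solve(A, rhs)

# Alternating Schwarz with two overlapping subdomains on (0, 1), f = 1.
N = 101
x = np.linspace(0.0, 1.0, N)
h = x[1] - x[0]
f = np.ones(N)
i1, i2 = 60, 40          # subdomain 1: (0, x[i1]); subdomain 2: (x[i2], 1); overlap in between
u = np.zeros(N)
for _ in range(20):
    u[1:i1] = local_solve(f[1:i1], 0.0, u[i1], h)              # read interface value at x[i1]
    u[i2 + 1:N - 1] = local_solve(f[i2 + 1:N - 1], u[i2], 0.0, h)
print(np.max(np.abs(u - 0.5 * x * (1 - x))))                   # compare with exact solution
```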
We propose an unconditionally energy-stable, orthonormality-preserving, component-wise splitting iterative scheme for the Kohn-Sham gradient flow based model in electronic structure calculations. We first study the scheme discretized in time but still continuous in space. The component-wise splitting iterative scheme changes one wave function at a time, similar to the Gauss-Seidel iteration for solving a linear system of equations. Rigorous mathematical derivations are presented to show that the proposed scheme indeed satisfies the desired properties. We then study the fully discretized scheme, where the space is further approximated by a conforming finite element subspace. For the fully discretized scheme, not only can the preservation of orthogonality and normalization (together referred to as orthonormality) be quickly shown using the same idea as for the semi-discretized scheme, but the key property of the scheme, namely its unconditional energy stability, can also be rigorously proven. The scheme allows us to use large time step sizes and deal with small systems involving only a single wave function during each iteration step. Several numerical experiments are performed to verify the theoretical analysis, where the number of iterations is indeed greatly reduced compared to similar examples solved by the Kohn-Sham gradient flow based model in the literature.
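As a rough illustration of the component-wise, Gauss-Seidel-like idea, the sketch below updates one wave function at a time for a toy quadratic energy and restores orthonormality explicitly via a QR factorisation; the scheme proposed in the paper instead preserves orthonormality by construction and is unconditionally energy stable, so this is only a simplified analogue with an assumed model energy.

```python
import numpy as np

def componentwise_sweep(Psi, H, dt):
    """One Gauss-Seidel-style sweep for the toy quadratic energy
    E(Psi) = sum_i <psi_i, H psi_i>: each wave function takes an explicit
    gradient-flow step, and the basis is re-orthonormalised immediately
    (an explicit QR simplification, not the paper's construction)."""
    for i in range(Psi.shape[1]):
        Psi[:, i] = Psi[:, i] - dt * (H @ Psi[:, i])   # update one wave function at a time
        Psi, _ = np.linalg.qr(Psi)                     # restore orthonormality
    return Psi

# Toy usage: flow toward the lowest eigenvectors of a random symmetric H.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2.0
Psi, _ = np.linalg.qr(rng.standard_normal((50, 4)))
for _ in range(500):
    Psi = componentwise_sweep(Psi, H, dt=0.05)
```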
We develop synthetic notions of oracle computability and Turing reducibility in the Calculus of Inductive Constructions (CIC), the constructive type theory underlying the Coq proof assistant. As usual in synthetic approaches, we employ a definition of oracle computations based on meta-level functions rather than object-level models of computation, relying on the fact that in constructive systems such as CIC all definable functions are computable by construction. Such an approach lends itself well to machine-checked proofs, which we carry out in Coq. There is a tension in finding a good synthetic rendering of the higher-order notion of oracle computability. On the one hand, it has to be informative enough to prove central results, ensuring that all notions are faithfully captured. On the other hand, it has to be restricted enough to benefit from axioms for synthetic computability, which usually concern first-order objects. Drawing inspiration from a definition by Andrej Bauer based on continuous functions in the effective topos, we use a notion of sequential continuity to characterise valid oracle computations. As our main technical results, we show that Turing reducibility forms an upper semilattice, transports decidability, and is strictly more expressive than truth-table reducibility, and we prove that whenever a predicate $p$ and its complement are both semi-decidable relative to an oracle $q$, then $p$ Turing-reduces to $q$.
Neural operators have emerged as a powerful tool for learning the mapping between infinite-dimensional parameter and solution spaces of partial differential equations (PDEs). In this work, we focus on multiscale PDEs, which have important applications such as reservoir modeling and turbulence prediction. We demonstrate that for such PDEs, the spectral bias towards low-frequency components presents a significant challenge for existing neural operators. To address this challenge, we propose a hierarchical attention neural operator (HANO) inspired by the hierarchical matrix approach. HANO features a scale-adaptive interaction range and self-attention over a hierarchy of levels, enabling nested feature computation with controllable linear cost and encoding/decoding of the multiscale solution space. We also incorporate an empirical $H^1$ loss function to enhance the learning of high-frequency components. Our numerical experiments demonstrate that HANO outperforms state-of-the-art (SOTA) methods for representative multiscale problems.
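As an illustration of an empirical $H^1$-type loss, the sketch below combines a mean-squared misfit of the values with a mean-squared misfit of finite-difference gradients on a uniform grid; the weighting and discretisation are assumptions for illustration and not necessarily the loss used in HANO.

```python
import numpy as np

def h1_loss(u_pred, u_true, h=1.0, alpha=1.0):
    """Empirical H^1-type loss on a uniform 2D grid: L2 misfit of the values
    plus a weighted L2 misfit of the finite-difference gradients, which
    penalises errors in high-frequency components more strongly than a plain
    L2 loss. Weighting and discretisation here are illustrative choices."""
    e = u_pred - u_true
    l2 = np.mean(e**2)
    ex = np.diff(e, axis=0) / h           # forward differences in x
    ey = np.diff(e, axis=1) / h           # forward differences in y
    seminorm = np.mean(ex**2) + np.mean(ey**2)
    return l2 + alpha * seminorm

# Toy usage on a 64x64 grid with a small but highly oscillatory error.
x = np.linspace(0, 1, 64)
X, Y = np.meshgrid(x, x, indexing="ij")
u_true = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
u_pred = u_true + 0.01 * np.sin(16 * np.pi * X)
print(h1_loss(u_pred, u_true, h=x[1] - x[0]))
```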
Considerable recent work has focused on methods for analyzing experiments which exhibit treatment interference -- that is, when the treatment status of one unit may affect the response of another unit. Such settings are common in experiments on social networks. We consider a model of treatment interference -- the $K$-nearest neighbors interference model (KNNIM) -- in which the response of one unit depends not only on the treatment status given to that unit, but also on the treatment statuses of its $K$ ``closest'' neighbors. We derive causal estimands under KNNIM in a way that allows us to identify how each of the $K$ nearest neighbors contributes to the indirect effect of treatment. We propose unbiased estimators for these estimands and derive conservative variance estimates for these unbiased estimators. We then consider extensions of these estimators under an assumption of no weak interaction between direct and indirect effects. We perform a simulation study to assess the efficacy of these estimators under different treatment interference scenarios. We apply our methodology to an experiment designed to assess the impact of a conflict-reducing program in middle schools in New Jersey, and we give evidence that the effect of treatment propagates primarily through a unit's closest connection.
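For intuition, the following is a generic Horvitz-Thompson-style sketch of a direct-effect estimator under $K$-nearest-neighbor interference with independent Bernoulli treatment assignment; the exposure definition, assignment mechanism, and toy outcome model are assumptions made for illustration, and this is not necessarily one of the estimators proposed in the paper.

```python
import numpy as np

def ht_direct_effect(Y, Z, knn_idx, p):
    """Horvitz-Thompson-style estimate of the direct effect of treatment when
    all K nearest neighbours are untreated, assuming independent Bernoulli(p)
    assignment. A generic sketch of such an estimator."""
    n, K = knn_idx.shape
    nbrs_untreated = np.all(Z[knn_idx] == 0, axis=1)     # exposure indicator
    w1 = (Z == 1) & nbrs_untreated                       # treated, neighbours untreated
    w0 = (Z == 0) & nbrs_untreated                       # control, neighbours untreated
    pi1 = p * (1 - p) ** K                               # P(Z_i = 1, neighbours all 0)
    pi0 = (1 - p) ** (K + 1)                             # P(Z_i = 0, neighbours all 0)
    return np.mean(Y * w1 / pi1 - Y * w0 / pi0)

# Toy usage: n units on a ring, each unit's K = 2 nearest neighbours.
rng = np.random.default_rng(3)
n, K, p = 500, 2, 0.5
knn_idx = np.stack([(np.arange(n) - 1) % n, (np.arange(n) + 1) % n], axis=1)
Z = rng.binomial(1, p, size=n)
Y = 1.0 * Z + 0.3 * Z[knn_idx].sum(axis=1) + rng.standard_normal(n)  # true direct effect = 1
print(ht_direct_effect(Y, Z, knn_idx, p))
```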