In this paper we propose a local projector for truncated hierarchical B-splines (THB-splines). The local THB-spline projector is an adaptation of the Bézier projector proposed by Thomas et al. (Comput Methods Appl Mech Eng 284, 2015) for B-splines and analysis-suitable T-splines (AS T-splines). In contrast to B-splines and AS T-splines, THB-splines admit elements on which their restrictions are linearly dependent. We therefore cluster certain local mesh elements together such that the THB-splines with support over these clusters are linearly independent, and adapt the Bézier projector to operate on these clusters. We introduce general extensions for which optimal convergence is shown both theoretically and numerically. In addition, a simple adaptive refinement scheme is introduced and compared to that of Giust et al. (Comput. Aided Geom. Des. 80, 2020); we find that our simple approach shows promise.
Formalized $1$-category theory forms a core component of various libraries of mathematical proofs. However, more sophisticated results in fields from algebraic topology to theoretical physics, where objects have "higher structure," rely on infinite-dimensional categories in place of $1$-dimensional categories, and $\infty$-category theory has thus far resisted computer formalization. Using a new proof assistant called Rzk, which is designed to support Riehl--Shulman's simplicial extension of homotopy type theory for synthetic $\infty$-category theory, we provide the first formalizations of results from $\infty$-category theory. These include, in particular, a formalization of the Yoneda lemma, often regarded as the fundamental theorem of category theory: a theorem which roughly states that an object of a given category is determined by its relationship to all of the other objects of the category. A key feature of our framework is that, thanks to the synthetic theory, many constructions are automatically natural or functorial. We plan to use Rzk to formalize further results from $\infty$-category theory, such as the theory of limits, colimits, and adjunctions.
This work is concerned with solving high-dimensional Fokker-Planck equations from the perspective that solving the PDE can be reduced to independent instances of density estimation based on trajectories sampled from the associated particle dynamics. This approach sidesteps the error accumulation that occurs when integrating the PDE dynamics over a parameterized function class. It also significantly simplifies deployment, as one avoids the challenges of implementing loss terms based on the differential equation. In particular, we introduce a novel class of high-dimensional functions called the functional hierarchical tensor (FHT). The FHT ansatz leverages a hierarchical low-rank structure, offering the advantage of runtime and memory complexity that scale linearly with the dimension count. We introduce a sketching-based technique that performs density estimation over particles simulated from the particle dynamics associated with the equation, thereby obtaining a representation of the Fokker-Planck solution in terms of our ansatz. We successfully apply the proposed approach to three challenging time-dependent Ginzburg-Landau models with hundreds of variables.
In this work we propose an adaptive multilevel version of subset simulation to estimate the probability of rare events for complex physical systems. Given a sequence of nested failure domains of increasing size, the rare event probability is expressed as a product of conditional probabilities. The proposed new estimator uses different model resolutions and varying numbers of samples across the hierarchy of nested failure sets. To dramatically reduce the computational cost, we construct the intermediate failure sets such that only a small number of expensive high-resolution model evaluations are needed, whilst the majority of samples can be taken from inexpensive low-resolution simulations. A key idea in our new estimator is the use of a posteriori error estimators combined with a selective mesh refinement strategy to guarantee the critical subset property, which may be violated when changing model resolution from one failure set to the next. The efficiency gains and the statistical properties of the estimator are investigated both theoretically, via shaking transformations, and numerically. On a model problem from subsurface flow, the new multilevel estimator achieves gains of more than a factor of 60 over standard subset simulation for a practically relevant relative error of 25%.
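The single-resolution core of subset simulation can be sketched in a few lines. The following is a minimal illustration only (not the paper's multilevel estimator, which additionally varies the model resolution across levels): for a toy limit-state function $g$ on standard-normal inputs, the rare-event probability is accumulated as a product of level probabilities $p_0$, with each conditional level repopulated by a simple Metropolis sampler. All names, the limit-state function, and the tuning constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # toy limit-state function: "failure" when the sum of the
    # i.i.d. standard-normal inputs exceeds a threshold b
    return x.sum(axis=-1)

def subset_simulation(b=4.5, dim=2, n=2000, p0=0.1, max_levels=20):
    """Estimate P(g(X) > b) as a product of conditional probabilities
    over a sequence of nested intermediate failure sets {g > b_i}."""
    x = rng.standard_normal((n, dim))
    prob = 1.0
    for _ in range(max_levels):
        y = g(x)
        order = np.argsort(y)[::-1]        # sort samples by g, descending
        n_seed = int(p0 * n)
        b_i = y[order[n_seed - 1]]         # intermediate threshold (p0-quantile)
        if b_i >= b:                       # final level reached
            return prob * np.mean(y > b)
        prob *= p0
        seeds = x[order[:n_seed]]
        # Metropolis resampling conditional on {g > b_i}, started from the seeds
        chains = []
        for s in seeds:
            cur = s.copy()
            for _ in range(n // n_seed):
                cand = cur + 0.8 * rng.standard_normal(dim)
                # accept w.r.t. the standard-normal density, then enforce g > b_i
                accept = rng.random() < np.exp(0.5 * (cur @ cur - cand @ cand))
                if accept and g(cand) > b_i:
                    cur = cand
                chains.append(cur.copy())
        x = np.asarray(chains)
    return prob * np.mean(g(x) > b)

p_hat = subset_simulation()
```

For this toy problem the exact probability is $1-\Phi(4.5/\sqrt{2}) \approx 7\cdot 10^{-4}$, which crude Monte Carlo with the same budget of 2000 samples per level would resolve poorly.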
This work studies the global convergence and implicit bias of the Gauss-Newton (GN) method when optimizing over-parameterized one-hidden-layer networks in the mean-field regime. We first establish a global convergence result for GN in the continuous-time limit, exhibiting a faster convergence rate than gradient descent (GD) due to improved conditioning. We then perform an empirical study on a synthetic regression task to investigate the implicit bias of the GN method. While GN is consistently faster than GD in finding a global optimum, the learned model generalizes well on test data when starting from random initial weights with a small variance and using a small step size to slow down convergence. Specifically, our study shows that such a setting results in a hidden learning phenomenon, where the dynamics are able to recover features with good generalization properties despite the model having sub-optimal training and test performance due to an under-optimized linear layer. This study exhibits a trade-off between the convergence speed of GN and the generalization ability of the learned solution.
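To make the GN-versus-GD comparison concrete, here is a small self-contained experiment in the spirit of the synthetic study (a sketch under our own assumptions, not the paper's setup): a one-hidden-layer tanh network is fit to 1-d regression data, and a damped Gauss-Newton iteration with simple backtracking is compared against gradient descent at the same iteration count. The network sizes, step sizes, and damping constant are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# tiny regression task: one-hidden-layer tanh network, 1-d inputs
n, m = 40, 8                        # samples, hidden units
x = np.linspace(-2.0, 2.0, n)
y = np.sin(x)                       # target function

def forward(theta):
    w, a = theta[:m], theta[m:]     # hidden weights, output weights
    return np.tanh(np.outer(x, w)) @ a

def jacobian(theta):
    w, a = theta[:m], theta[m:]
    h = np.tanh(np.outer(x, w))
    dw = (1.0 - h**2) * x[:, None] * a   # d f / d w_j = a_j (1 - h_j^2) x
    return np.hstack([dw, h])            # (n, 2m); h columns are d f / d a_j

def loss(theta):
    return np.mean((y - forward(theta))**2)

def gauss_newton(theta, steps=100, damping=0.5):
    for _ in range(steps):
        r = y - forward(theta)
        J = jacobian(theta)
        # damped GN (Levenberg-Marquardt-style) step
        step = np.linalg.solve(J.T @ J + damping * np.eye(2 * m), J.T @ r)
        t = 1.0                    # simple backtracking keeps the step stable
        while t > 1e-6 and loss(theta + t * step) > loss(theta):
            t *= 0.5
        theta = theta + t * step
    return theta

def gradient_descent(theta, steps=100, lr=1e-3):
    for _ in range(steps):
        theta = theta + lr * jacobian(theta).T @ (y - forward(theta))
    return theta

theta0 = 0.5 * rng.standard_normal(2 * m)
th_gn = gauss_newton(theta0.copy())
th_gd = gradient_descent(theta0.copy())
```

At an equal iteration budget, the GN iterates typically reach a far smaller training loss than GD, reflecting the improved conditioning discussed in the abstract; the generalization behavior studied in the paper is not probed by this sketch.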
In this work we consider the two-dimensional instationary Navier-Stokes equations with homogeneous Dirichlet/no-slip boundary conditions. We show error estimates for the fully discrete problem, where a discontinuous Galerkin method in time and inf-sup stable finite elements in space are used. Recently, best approximation type error estimates for the Stokes problem in the $L^\infty(I;L^2(\Omega))$, $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms have been shown. The main result of the present work extends the error estimate in the $L^\infty(I;L^2(\Omega))$ norm to the Navier-Stokes equations by pursuing an error splitting approach and an appropriate duality argument. In order to discuss the stability of solutions to the discrete primal and dual equations, a specially tailored discrete Gronwall lemma is presented. The techniques developed to show the $L^\infty(I;L^2(\Omega))$ error estimate also allow us to prove best approximation type error estimates in the $L^2(I;H^1(\Omega))$ and $L^2(I;L^2(\Omega))$ norms, which complement the main result.
In this paper, we discuss numerical realizations of Shannon's sampling theorem. First, we show the poor convergence of classical Shannon sampling sums by presenting sharp upper and lower bounds on the norm of the Shannon sampling operator. In addition, it is known that in the presence of noise in the samples of a bandlimited function, the convergence of the Shannon sampling series may even break down completely. To overcome these drawbacks, one can use oversampling and regularization with a convenient window function, which can be chosen either in the frequency domain or in the time domain. We place particular emphasis on comparing these two approaches in terms of error decay rates. It turns out that the best numerical results are obtained by oversampling and regularization in the time domain using a sinh-type window function or a continuous Kaiser-Bessel window function, which results in an interpolating approximation with localized sampling. Several numerical experiments illustrate the theoretical results.
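A minimal sketch of time-domain regularization with a sinh-type window is given below. The test function, the oversampling parameter $\tau$, and the truncation radius $m$ are our illustrative assumptions; the shape parameter $\beta = \pi m(1-\tau)$ is one common choice tied to the gap between the bandwidth and the Nyquist rate. The windowed sum uses only the $2m+1$ samples nearest to the evaluation point (localized sampling), while the plain truncated Shannon sum with the same samples converges poorly.

```python
import numpy as np

tau, m = 0.5, 10                  # oversampling parameter and truncation radius
beta = np.pi * m * (1 - tau)      # one standard choice of sinh-window shape

# bandlimited test function with bandwidth pi*tau < pi (unit sample spacing)
f = lambda t: np.sinc(tau * t)    # np.sinc(x) = sin(pi x)/(pi x)

def phi_sinh(x):
    """sinh-type window, supported on |x| <= m."""
    u = np.clip(1.0 - (x / m)**2, 0.0, None)
    return np.sinh(beta * np.sqrt(u)) / np.sinh(beta)

def approx(t, windowed=True):
    # localized sampling: only the 2m+1 samples nearest to t enter the sum
    k = np.arange(np.ceil(t) - m, np.floor(t) + m + 1)
    w = phi_sinh(t - k) if windowed else 1.0
    return np.sum(f(k) * np.sinc(t - k) * w)

ts = np.linspace(0.2, 5.7, 25)    # evaluation points between the sample nodes
err_trunc = max(abs(approx(t, windowed=False) - f(t)) for t in ts)
err_win   = max(abs(approx(t, windowed=True)  - f(t)) for t in ts)
```

The windowed approximation error decays essentially like $e^{-\beta}$ as $m$ grows, whereas the truncated classical sum loses several orders of magnitude less accuracy over the same stencil.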
Subsurface storage of CO$_2$ is an important means to mitigate climate change, and to investigate the fate of CO$_2$ over several decades in vast reservoirs, numerical simulation based on realistic models is essential. Faults and other complex geological structures introduce modeling challenges, as their effects on storage operations are uncertain due to limited data. In this work, we present a computational framework for forward propagation of uncertainty, including stochastic upscaling and copula representation of flow functions for a CO$_2$ storage site, using the Vette fault zone in the Smeaheia formation in the North Sea as a test case. The upscaling method leads to a reduction of the number of stochastic dimensions and the cost of evaluating the reservoir model. A viable model that represents the upscaled data needs to capture dependencies between variables and allow sampling. Copulas provide a representation of dependent multidimensional random variables and a good fit to data, allow fast sampling, and couple to the forward propagation method via independent uniform random variables. The non-stationary correlation within some of the upscaled flow functions is accurately captured by a data-driven transformation model. The uncertainty in upscaled flow functions and other parameters is propagated to uncertain leakage estimates using numerical reservoir simulation of a two-phase system. The expectations of leakage are estimated by an adaptive stratified sampling technique, where samples are sequentially concentrated in regions of the parameter space to greedily maximize variance reduction. We demonstrate cost reduction, compared to standard Monte Carlo, of one or two orders of magnitude for simpler test cases with only fault and reservoir layer permeabilities assumed uncertain, and a factor 2--8 cost reduction for stochastic multi-phase flow properties and more complex stochastic models.
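As an illustration of the copula step alone, the sketch below fits a bivariate Gaussian copula to synthetic dependent data standing in for upscaled flow-function parameters (the data model, the two-dimensional setting, and all names are our assumptions, not the site model). The copula correlation is estimated from Spearman's rank correlation, and new realizations are sampled by pushing correlated normals through the normal CDF and then through the empirical marginal quantiles.

```python
import numpy as np
from math import erf, sin, pi

rng = np.random.default_rng(3)

# synthetic dependent data with non-Gaussian marginals
n = 4000
z = rng.standard_normal((n, 2))
x1 = np.exp(z[:, 0])                            # lognormal marginal
x2 = 1.0 + np.exp(0.8 * z[:, 0] + 0.6 * z[:, 1])  # dependent shifted-lognormal
data = np.column_stack([x1, x2])

def spearman(a, b):
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Gaussian-copula correlation recovered from Spearman's rho
rho = 2.0 * sin(pi * spearman(x1, x2) / 6.0)

def sample(n_samp):
    """Sample the fitted Gaussian copula with empirical marginals."""
    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    g = rng.standard_normal((n_samp, 2)) @ L.T          # correlated normals
    u = 0.5 * (1.0 + np.vectorize(erf)(g / np.sqrt(2)))  # to uniforms via Phi
    # inverse empirical marginal transforms (quantile functions of the data)
    return np.column_stack([np.quantile(data[:, j], u[:, j]) for j in range(2)])

s = sample(4000)
```

In practice the independent standard normals driving `sample` would themselves be generated from the independent uniforms supplied by the forward propagation method, which is exactly the coupling mechanism the abstract refers to.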
Let $-A$ be the generator of a bounded $C_0$-semigroup $(e^{-tA})_{t \geq 0}$ on a Hilbert space. First we study the long-time asymptotic behavior of the Cayley transform $V_{\omega}(A) := (A-\omega I) (A+\omega I)^{-1}$ with $\omega >0$. We give a decay estimate for $\|V_{\omega}(A)^nA^{-1}\|$ when $(e^{-tA})_{t \geq 0}$ is polynomially stable. Considering the case where the parameter $\omega$ varies, we estimate $\|\prod_{k=1}^n V_{\omega_k}(A)\,A^{-1}\|$ for exponentially stable $C_0$-semigroups $(e^{-tA})_{t \geq 0}$. Next we show that if the generator $-A$ of the bounded $C_0$-semigroup has a bounded inverse, then $\sup_{t \geq 0} \|e^{-tA^{-1}} A^{-\alpha} \| < \infty$ for all $\alpha >0$. We also present an estimate for the rate of decay of $\|e^{-tA^{-1}} A^{-1} \|$, assuming that $(e^{-tA})_{t \geq 0}$ is polynomially stable. To obtain these results, we use operator norm estimates offered by a functional calculus called the $\mathcal{B}$-calculus.
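As a toy numerical illustration of the first result (a diagonal stand-in, not the paper's operator-theoretic argument), take $A = \mathrm{diag}(k^{-2} + ik)$, whose eigenvalues approach the imaginary axis polynomially, mimicking a polynomially stable semigroup. For a normal operator, $\|V_\omega(A)^n A^{-1}\|$ reduces to a maximum over eigenvalues, so its decay in $n$ can be observed directly; the specific eigenvalue sequence and truncation are our assumptions.

```python
import numpy as np

# diagonal model of a polynomially stable semigroup generator:
# eigenvalues approach the imaginary axis like 1/k^2
k = np.arange(1, 201)
lam = 1.0 / k**2 + 1j * k

omega = 1.0
v = (lam - omega) / (lam + omega)   # eigenvalues of the Cayley transform

def cayley_decay(n):
    """|| V_omega(A)^n A^{-1} || for this diagonal (normal) operator."""
    return np.max(np.abs(v)**n / np.abs(lam))

norms = [cayley_decay(n) for n in (1, 10, 100, 1000)]
```

Since each $|v(\lambda_k)| < 1$, every mode decays, but the modes with large $|\mathrm{Im}\,\lambda_k|$ decay slowly; the smoothing factor $A^{-1}$ damps exactly those modes, which is what produces the observed polynomial-type decay of the maximum.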
In this study, we present a precise anisotropic interpolation error estimate for the Morley finite element method (FEM) and apply it to fourth-order elliptic equations. We do not impose a shape-regularity condition on the mesh for the analysis; therefore, anisotropic meshes can be used. The main contribution of this study is a new proof for the consistency term, which enables us to obtain an anisotropic consistency error estimate. The core idea of the proof involves using the relationship between the Raviart--Thomas and Morley finite element spaces. Our results show optimal convergence rates and imply that the modified Morley FEM may be effective at controlling errors on anisotropic meshes.
BCH codes are an interesting class of cyclic codes due to their efficient encoding and decoding algorithms. In the past sixty years, much progress has been made in the study of BCH codes, but little is known about the properties of their duals. Recently, in order to study the duals of BCH codes and lower bounds on their minimum distances, a new concept called dually-BCH codes was proposed by the authors of \cite{GDL21}. In this paper, we develop lower bounds on the minimum distances of the duals of narrow-sense BCH codes with length $\frac{q^m-1}{\lambda}$ over $\mathbb{F}_q$, where $\lambda$ is a positive integer satisfying $\lambda\, |\, q-1$, or $\lambda=q^s-1$ and $s\, |\,m$. In addition, we present necessary and sufficient conditions, in terms of the designed distances, for these codes to be dually-BCH codes. Many of the codes considered in \cite{GDL21} and \cite{Wang23} are special cases of the codes studied in this paper. Our lower bounds on the minimum distances of the duals of BCH codes include the bounds stated in \cite{GDL21} as a special case. Several examples show that the lower bounds are good in some cases.