This paper is concerned with a Bayesian approach to testing hypotheses in statistical inverse problems. Based on the posterior distribution $\Pi \left(\cdot |Y = y\right)$, we want to infer whether a feature $\langle\varphi, u^\dagger\rangle$ of the unknown quantity of interest $u^\dagger$ is positive. This can be done by the so-called maximum a posteriori test. We provide a frequentist analysis of this test's properties, such as level and power, and prove that it is a regularized test in the sense of Kretschmann et al. (2024). Furthermore, we provide lower bounds for its power under classical spectral source conditions in the case of Gaussian priors. Numerical simulations illustrate its superior performance in both moderately and severely ill-posed situations.
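A minimal sketch of the decision rule (ours, not the paper's construction): in a finite-dimensional linear Gaussian model with a Gaussian prior, the posterior of the feature $\langle\varphi, u\rangle$ is a one-dimensional Gaussian, and the maximum a posteriori test decides for positivity exactly when the posterior probability of $\{\langle\varphi, u\rangle > 0\}$ exceeds $1/2$. All dimensions, operators, and parameters below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Finite-dimensional stand-in for a linear inverse problem Y = A u + noise.
n, m = 50, 40                      # unknown dimension, number of observations
A = rng.standard_normal((m, n)) / np.sqrt(m)
sigma = 0.1                        # noise level
u_true = rng.standard_normal(n)
y = A @ u_true + sigma * rng.standard_normal(m)

# Gaussian prior u ~ N(0, tau^2 I); conjugate Gaussian posterior.
tau = 1.0
prior_cov = tau**2 * np.eye(n)
post_cov = np.linalg.inv(A.T @ A / sigma**2 + np.linalg.inv(prior_cov))
post_mean = post_cov @ (A.T @ y) / sigma**2

# Feature <phi, u>: its posterior is a one-dimensional Gaussian.
phi = rng.standard_normal(n)
feat_mean = phi @ post_mean
feat_std = np.sqrt(phi @ post_cov @ phi)

# MAP test: decide for "<phi, u> > 0" iff its posterior probability exceeds 1/2,
# which for a Gaussian posterior means iff the posterior mean of the feature is positive.
p_positive = 1.0 - norm.cdf(0.0, loc=feat_mean, scale=feat_std)
print(f"true feature      : {phi @ u_true:+.3f}")
print(f"posterior P(>0)   : {p_positive:.3f}")
print(f"MAP test decision : {'positive' if p_positive > 0.5 else 'not positive'}")
```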
This paper proves a homomorphism between extensional formal semantics and distributional vector space semantics, demonstrating structural compatibility. Formal semantics models meaning as reference, using logical structures to map linguistic expressions to truth conditions, while distributional semantics represents meaning through word vectors derived from contextual usage. By constructing injective mappings that preserve semantic relationships, we show that every semantic function in an extensional model corresponds to a compatible vector space operation. This result respects compositionality and extends to function compositions, constant interpretations, and $n$-ary relations. Rather than pursuing unification, we highlight a mathematical foundation for hybrid cognitive models that integrate symbolic and sub-symbolic reasoning and semantics. These findings support multimodal language processing, aligning `meaning as reference' (Frege, Tarski) with `meaning as use' (Wittgenstein, Firth).
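As a toy illustration of the kind of structure preservation at stake (our example, not the paper's construction), the sketch below embeds a three-element extensional model injectively into a vector space via one-hot vectors and carries a unary semantic function to a matrix, so that interpretation and embedding commute and composition of functions corresponds to matrix products. The domain, constants, and function are made up.

```python
import numpy as np

# Toy extensional model: a small domain, constants, and a unary function.
domain = ["alice", "bob", "carol"]
constants = {"a": "alice", "b": "bob"}
best_friend = {"alice": "bob", "bob": "carol", "carol": "alice"}  # a unary semantic function

# Injective embedding of the domain into a vector space (one-hot vectors).
idx = {e: i for i, e in enumerate(domain)}
def embed(entity):
    v = np.zeros(len(domain))
    v[idx[entity]] = 1.0
    return v

# Each semantic function is carried to a linear map chosen so that the square
#   embed(f(e)) == M_f @ embed(e)
# commutes for every entity e.
def as_matrix(f):
    M = np.zeros((len(domain), len(domain)))
    for e in domain:
        M[idx[f[e]], idx[e]] = 1.0
    return M

M = as_matrix(best_friend)
for e in domain:
    assert np.allclose(embed(best_friend[e]), M @ embed(e))

# Composition is preserved: the matrix of f∘f is the product of the matrices.
ff = {e: best_friend[best_friend[e]] for e in domain}
assert np.allclose(as_matrix(ff), M @ M)
print("homomorphism check passed for the toy model")
```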
Causality plays an important role in understanding intelligent behavior, and there is a wealth of literature on mathematical models for causality, most of which is focused on causal graphs. Causal graphs are a powerful tool for a wide range of applications, in particular when the relevant variables are known and at the same level of abstraction. However, the given variables can also be unstructured data, like pixels of an image. Meanwhile, the causal variables, such as the positions of objects in the image, can be arbitrary deterministic functions of the given variables. Moreover, the causal variables may form a hierarchy of abstractions, in which the macro-level variables are deterministic functions of the micro-level variables. Causal graphs are limited when it comes to modeling this kind of situation. In the presence of deterministic relationships there is generally no causal graph that satisfies both the Markov condition and the faithfulness condition. We introduce factored space models as an alternative to causal graphs that naturally represents both probabilistic and deterministic relationships at all levels of abstraction. Moreover, we introduce structural independence and establish that it is equivalent to statistical independence in every distribution that factorizes over the factored space. This theorem generalizes the classical soundness and completeness theorem for d-separation.
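The following toy example (ours, not from the paper) illustrates why deterministic relationships are problematic for causal graphs: a macro-level variable defined as a deterministic function of two micro-level coin flips is pairwise independent of each flip, yet fully determined by the pair, so no causal graph over the three variables can be both Markov and faithful to this distribution.

```python
from itertools import product
from fractions import Fraction

# Toy factored sample space: two independent fair coins X1, X2 (the "factors"),
# and a macro-level variable Y that is a deterministic function of them.
omega = list(product([0, 1], repeat=2))          # sample space as a product of factors
prob = {w: Fraction(1, 4) for w in omega}        # the product (uniform) distribution

X1 = lambda w: w[0]
X2 = lambda w: w[1]
Y  = lambda w: w[0] ^ w[1]                        # deterministic: Y = X1 XOR X2

def independent(U, V):
    """Check statistical independence of two finite-valued variables exactly."""
    pU, pV, pUV = {}, {}, {}
    for w, p in prob.items():
        pU[U(w)] = pU.get(U(w), 0) + p
        pV[V(w)] = pV.get(V(w), 0) + p
        pUV[(U(w), V(w))] = pUV.get((U(w), V(w)), 0) + p
    return all(pUV.get((u, v), 0) == pU[u] * pV[v] for u in pU for v in pV)

# Y is pairwise independent of each micro variable, yet fully determined by both:
print(independent(Y, X1), independent(Y, X2))     # True True
print(independent(Y, lambda w: w))                # False: Y is a function of (X1, X2)
```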
In this paper, a highly parallel and derivative-free martingale neural network learning method is proposed to solve Hamilton-Jacobi-Bellman (HJB) equations arising from stochastic optimal control problems (SOCPs), as well as general quasilinear parabolic partial differential equations (PDEs). In both cases, the PDEs are reformulated into a martingale formulation such that the loss functions do not require the computation of the gradient or Hessian matrix of the PDE solution, while the implementation can be parallelized in both the temporal and spatial domains. Moreover, the martingale conditions for the PDEs are enforced using a Galerkin method in conjunction with adversarial learning techniques, eliminating the need for direct computation of the conditional expectations associated with the martingale property. For SOCPs, a derivative-free implementation of the maximum principle for optimal controls is also introduced. The numerical results demonstrate the effectiveness and efficiency of the proposed method, which is capable of solving HJB and quasilinear parabolic PDEs accurately in dimensions as high as 10,000.
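To convey the flavour of a derivative-free martingale loss (a simplified stand-in we made up, not the paper's network, Galerkin space, or adversarial training), the sketch below evaluates a Galerkin-type martingale residual for the one-dimensional backward heat equation $u_t + \tfrac{1}{2} u_{xx} = 0$ with terminal data $u(T,x) = x^2$, whose exact solution $u(t,x) = x^2 + (T - t)$ makes $u(t, W_t)$ a martingale along Brownian paths. The residual uses only pathwise evaluations of the candidate solution, with no gradients or Hessians.

```python
import numpy as np

rng = np.random.default_rng(1)

# Along Brownian paths X, a solution u of u_t + 0.5 * u_xx = 0 makes u(t, X_t)
# a martingale, so Galerkin-type residuals
#   E[(u(t+dt, X_{t+dt}) - u(t, X_t)) * phi(X_t)]
# vanish for every test function phi; no spatial derivatives of u are needed.
T, n_steps, n_paths = 1.0, 50, 100_000
dt = T / n_steps
t_grid = np.linspace(0.0, T, n_steps + 1)
X = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)
X = np.hstack([np.zeros((n_paths, 1)), X])          # Brownian paths started at 0

def residual(u):
    """Monte Carlo Galerkin residual with test functions phi in {1, x, x^2}."""
    res = 0.0
    for k in range(n_steps):
        incr = u(t_grid[k + 1], X[:, k + 1]) - u(t_grid[k], X[:, k])
        for phi in (np.ones(n_paths), X[:, k], X[:, k] ** 2):
            res += np.mean(incr * phi) ** 2
    return res

exact = lambda t, x: x**2 + (T - t)
wrong = lambda t, x: x**2 + 2.0 * (T - t)            # violates the PDE
print(f"residual(exact) = {residual(exact):.2e}")    # ~ Monte Carlo noise level
print(f"residual(wrong) = {residual(wrong):.2e}")    # clearly nonzero violation
```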
This paper introduces a novel approach to learning sparsity-promoting regularizers for solving linear inverse problems. We develop a bilevel optimization framework to select an optimal synthesis operator, denoted as $B$, which regularizes the inverse problem while promoting sparsity in the solution. The method leverages statistical properties of the underlying data and incorporates prior knowledge through the choice of $B$. We establish the well-posedness of the optimization problem, provide theoretical guarantees for the learning process, and present sample complexity bounds. The approach is demonstrated through examples, including compact perturbations of a known operator and the problem of learning the mother wavelet, showcasing its flexibility in incorporating prior knowledge into the regularization framework. This work extends previous efforts in Tikhonov regularization by addressing non-differentiable norms and proposing a data-driven approach for sparse regularization in infinite dimensions.
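A much-simplified, finite-dimensional caricature of the bilevel setup (our sketch; the paper works in infinite dimensions and with a genuine learning procedure): the lower level solves the synthesis sparse-regularization problem by ISTA, and the upper level selects the synthesis operator $B$ from an invented one-parameter family by comparing reconstruction errors on training data with known ground truth. All dimensions, the operator family, and the grid search are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(2)

n, m, k = 64, 32, 64
A = rng.standard_normal((m, n)) / np.sqrt(m)          # forward operator
D0 = np.linalg.qr(rng.standard_normal((n, k)))[0]     # known synthesis operator
P = rng.standard_normal((n, k)) / np.sqrt(n)          # a fixed perturbation direction

def ista(A, B, y, alpha, n_iter=300):
    """Lower level: argmin_x 0.5||A B x - y||^2 + alpha ||x||_1 via ISTA."""
    M = A @ B
    L = np.linalg.norm(M, 2) ** 2                     # Lipschitz constant of the gradient
    x = np.zeros(B.shape[1])
    for _ in range(n_iter):
        z = x - M.T @ (M @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft thresholding
    return x

# Training data: a sparse code in the "true" dictionary B_true = D0 + 0.3 * P.
B_true = D0 + 0.3 * P
x_true = np.zeros(k)
x_true[rng.choice(k, 5, replace=False)] = rng.standard_normal(5)
u_true = B_true @ x_true
y = A @ u_true + 0.01 * rng.standard_normal(m)

# Upper level: grid search over t in B(t) = D0 + t * P for the best reconstruction.
alpha = 0.01
for t in [0.0, 0.1, 0.2, 0.3, 0.4]:
    B = D0 + t * P
    u_hat = B @ ista(A, B, y, alpha)
    print(f"t = {t:.1f}   reconstruction error = {np.linalg.norm(u_hat - u_true):.3f}")
```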
In this paper, we develop bound-preserving techniques for the Runge--Kutta (RK) discontinuous Galerkin (DG) method with compact stencils (cRKDG method) for hyperbolic conservation laws. The cRKDG method was recently introduced in [Q. Chen, Z. Sun, and Y. Xing, SIAM J. Sci. Comput., 46: A1327--A1351, 2024]. It enhances the compactness of the standard RKDG method, resulting in reduced data communication, simplified boundary treatments, and improved suitability for local time marching. This work improves the robustness of the cRKDG method by enforcing desirable physical bounds while preserving its compactness, local conservation, and high-order accuracy. Our method extends the seminal work of [X. Zhang and C.-W. Shu, J. Comput. Phys., 229: 3091--3120, 2010]. We prove that the cell average of the cRKDG method at each RK stage preserves the physical bounds by expressing it as a convex combination of three types of forward-Euler solutions. A scaling limiter is then applied after each RK stage to enforce pointwise bounds. Additionally, we explore RK methods with less restrictive time step sizes. Because the cRKDG method does not rely on strong-stability-preserving RK time discretization, it avoids the associated order barriers, allowing us to construct a four-stage, fourth-order bound-preserving cRKDG method. Numerical tests on challenging benchmarks are provided to demonstrate the performance of the proposed method.
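In its generic Zhang--Shu form, the pointwise scaling limiter applied after each RK stage is a short routine: assuming the cell average already lies within the physical bounds, the polynomial's values at the relevant quadrature points are linearly scaled toward that average until they do too. The sketch below is this generic limiter only, detached from the cRKDG-specific cell-average analysis.

```python
import numpy as np

def scaling_limiter(point_values, cell_average, lo, hi):
    """Zhang--Shu-type scaling limiter: assuming the cell average lies in
    [lo, hi], linearly scale the polynomial's point values toward the average
    so that they also lie in [lo, hi]; the cell average is left unchanged."""
    u = np.asarray(point_values, dtype=float)
    ubar = float(cell_average)
    theta = 1.0
    if u.max() > hi:
        theta = min(theta, (hi - ubar) / (u.max() - ubar))
    if u.min() < lo:
        theta = min(theta, (ubar - lo) / (ubar - u.min()))
    return ubar + theta * (u - ubar)

# Values of a DG polynomial at quadrature points after an RK stage: the cell
# average is within [0, 1], but some point values overshoot the bounds.
vals = np.array([-0.05, 0.2, 0.7, 1.08])
print(scaling_limiter(vals, cell_average=0.48, lo=0.0, hi=1.0))
```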
We consider estimators obtained by iterates of the conjugate gradient (CG) algorithm applied to the normal equation of prototypical statistical inverse problems. Stopping the CG algorithm early induces regularisation, and optimal convergence rates of prediction and reconstruction error are established in wide generality for an ideal oracle stopping time. Based on this insight, a fully data-driven early stopping rule $\tau$ is constructed, which also attains optimal rates, provided the error in estimating the noise level is not dominant. The error analysis of CG under statistical noise is subtle due to its nonlinear dependence on the observations. We provide an explicit error decomposition and identify two terms in the prediction error, which share important properties of classical bias and variance terms. Together with a continuous interpolation between CG iterates, this paves the way for a comprehensive error analysis of early stopping. In particular, a general oracle-type inequality is proved for the prediction error at $\tau$. For bounding the reconstruction error, a more refined probabilistic analysis, based on concentration of self-normalised Gaussian processes, is developed. The methodology also provides some new insights into early stopping for CG in deterministic inverse problems. A numerical study for standard examples shows good results in practice for early stopping at $\tau$.
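For the basic mechanics, the sketch below runs CGLS (conjugate gradients applied to the normal equation) on a small simulated inverse problem and stops at the first iterate whose residual falls below a multiple of the estimated noise level. This discrepancy-style rule is only a simplified stand-in for the paper's data-driven stopping rule $\tau$; the problem, spectrum, and constants are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

def cgls_early_stopping(A, y, noise_level, kappa=1.0, max_iter=200):
    """CGLS iterates for the normal equation A^T A x = A^T y, stopped at the
    first iteration whose residual ||A x - y|| drops below kappa * noise_level."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for it in range(1, max_iter + 1):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        if np.linalg.norm(r) <= kappa * noise_level:
            return x, it
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, max_iter

# Mildly ill-posed toy problem with Gaussian noise.
m, n, sigma = 200, 100, 0.05
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U[:, :n] * (np.arange(1, n + 1) ** -1.0) @ V.T      # polynomially decaying spectrum
x_true = V @ (np.arange(1, n + 1) ** -1.5)
y = A @ x_true + sigma * rng.standard_normal(m)

x_hat, stop = cgls_early_stopping(A, y, noise_level=sigma * np.sqrt(m))
print(f"stopped at iteration {stop}, reconstruction error {np.linalg.norm(x_hat - x_true):.3f}")
```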
We present a novel class of projected gradient (PG) methods for minimizing a smooth but not necessarily convex function over a convex compact set. We first provide a novel analysis of the "vanilla" PG method, achieving the best-known iteration complexity for finding an approximate stationary point of the problem. We then develop an "auto-conditioned" projected gradient (AC-PG) variant that achieves the same iteration complexity without requiring the input of the Lipschitz constant of the gradient or any line search procedure. The key idea is to estimate the Lipschitz constant using first-order information gathered from the previous iterations, and to show that the error caused by underestimating the Lipschitz constant can be properly controlled. We then generalize the PG methods to the stochastic setting, by proposing a stochastic projected gradient (SPG) method and a variance-reduced stochastic gradient (VR-SPG) method, achieving new complexity bounds in different oracle settings. We also present auto-conditioned stepsize policies for both stochastic PG methods and establish comparable convergence guarantees.
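A minimal sketch of the auto-conditioning idea (ours; the paper's AC-PG stepsize policy and its analysis are more refined): run projected gradient over a Euclidean ball and maintain a running Lipschitz estimate built from consecutive gradients and iterates, so that neither the true Lipschitz constant nor a line search is needed. The objective and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# A smooth, nonconvex objective and its gradient: a quartic minus a quadratic form.
Q = rng.standard_normal((20, 20))
Q = 0.5 * (Q + Q.T)
f      = lambda x: 0.25 * np.sum(x**4) - 0.5 * x @ Q @ x
grad_f = lambda x: x**3 - Q @ x

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= radius} (convex, compact)."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def auto_conditioned_pg(x0, n_iter=500, L0=1.0):
    """Projected gradient with a Lipschitz estimate built from past iterates:
    no line search and no a priori knowledge of the gradient's Lipschitz constant."""
    x, g, L = x0, grad_f(x0), L0
    for _ in range(n_iter):
        x_new = project_ball(x - g / L)
        g_new = grad_f(x_new)
        step = np.linalg.norm(x_new - x)
        if step > 1e-12:
            # local curvature estimate from consecutive gradients and iterates
            L = max(L, np.linalg.norm(g_new - g) / step)
        x, g = x_new, g_new
    return x

x_star = auto_conditioned_pg(rng.standard_normal(20))
print(f"f at final iterate: {f(x_star):.4f}, ||x|| = {np.linalg.norm(x_star):.3f}")
```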
In this paper, we use memory-distributed level set-based topology optimisation to design three-dimensional periodic piezoelectric materials with enhanced properties. We compare and assess several existing iterative solvers with respect to their weak scalability and find that an approximate Schur complement preconditioned generalized minimal residual method demonstrates the best performance and scalability for solving the piezoelectric homogenisation equations. We use the developed techniques to computationally design high-resolution piezoelectric metamaterials with enhanced stiffness and piezoelectric properties that yield new insights into material design for sensor, hydrophone, and actuator applications. We suggest two robust structures with no fine-scale features that exhibit enhanced piezoelectric properties several times larger than those of the base material. We find that level set-based topology optimisation is well suited to problems involving piezoelectricity and has the advantage of avoiding large regions of intermediate density material. Our memory-distributed level-set implementation is open source and provided for practitioners in the community.
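To illustrate the linear-solver strategy only (a small serial SciPy stand-in, not the memory-distributed implementation or the actual homogenisation equations), the sketch below solves a coupled two-block system with GMRES preconditioned by block solves that use an approximate Schur complement built from the diagonal of the leading block. The block sizes, sparsity patterns, and the diagonal approximation are all invented.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(5)

# Small stand-in for a coupled piezoelectric-type block system
#   [ K    B ] [u]   [f]
#   [ B^T -C ] [p] = [g],
# solved by GMRES with a block preconditioner built from an approximate
# Schur complement  S ~= -(C + B^T diag(K)^{-1} B).
n_u, n_p = 400, 100
K = sp.random(n_u, n_u, density=0.02, random_state=5)
K = K @ K.T + 10.0 * sp.identity(n_u)                 # SPD "mechanical" block
C = sp.identity(n_p) * 2.0                            # SPD "dielectric" block
B = sp.random(n_u, n_p, density=0.05, random_state=6) # coupling block
A = sp.bmat([[K, B], [B.T, -C]], format="csc")
b = rng.standard_normal(n_u + n_p)

# Approximate Schur complement using only the diagonal of K.
Kdiag_inv = sp.diags(1.0 / K.diagonal())
S = -(C + B.T @ Kdiag_inv @ B)
K_solve, S_solve = spla.splu(K.tocsc()), spla.splu(S.tocsc())

def apply_prec(r):
    """Block-diagonal preconditioner: solve with K and the approximate Schur complement."""
    z = np.empty_like(r)
    z[:n_u] = K_solve.solve(r[:n_u])
    z[n_u:] = S_solve.solve(r[n_u:])
    return z

M = spla.LinearOperator(A.shape, matvec=apply_prec)
x, info = spla.gmres(A, b, M=M, restart=50)
print("GMRES converged" if info == 0 else f"GMRES info = {info}",
      "| residual:", np.linalg.norm(A @ x - b))
```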
The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.
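A toy numerical caricature of the simple-plus-spiky decomposition (ours; the survey's analysis is exact and asymptotic, this is not): with a few strong signal-carrying features and many weak ones, the minimum-norm interpolant, i.e. the limit of gradient flow started at zero, fits the noisy training data exactly, while its spiky part lives on the weak features and barely affects test predictions. Dimensions, variances, and the noise level are made up.

```python
import numpy as np

rng = np.random.default_rng(6)

# Overparametrized linear regression with a few strong features carrying the
# signal and many weak features available to absorb the noise.
n, d_strong, d_weak, sigma = 50, 5, 2000, 0.5
theta_strong = 2.0 * np.ones(d_strong)                 # true coefficients on strong features

def features(n_samples):
    S = rng.standard_normal((n_samples, d_strong))     # strong features, variance 1
    W = 0.05 * rng.standard_normal((n_samples, d_weak))  # weak features, small variance
    return np.hstack([S, W])

X = features(n)
y = X[:, :d_strong] @ theta_strong + sigma * rng.standard_normal(n)

# Minimum-norm interpolant (the limit of gradient flow started at zero).
theta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
simple, spiky = theta_hat[:d_strong], theta_hat[d_strong:]

X_test = features(2000)
y_test = X_test[:, :d_strong] @ theta_strong
print(f"train MSE: {np.mean((X @ theta_hat - y) ** 2):.2e}")   # ~ 0: exact interpolation
print(f"test  MSE: {np.mean((X_test @ theta_hat - y_test) ** 2):.3f}"
      f"  (null risk ~ {np.mean(y_test**2):.1f})")
print(f"estimated strong coefficients (truth 2.0): {np.round(simple, 2)}")
print(f"spiky part norm, spread over {d_weak} weak features: {np.linalg.norm(spiky):.2f}")
```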
Deep learning is usually described as an experiment-driven field that faces continual criticism for lacking theoretical foundations. This criticism has been partially addressed by a large volume of literature that has so far not been well organized. This paper reviews and organizes the recent advances in deep learning theory. The literature is categorized into six groups: (1) complexity- and capacity-based approaches for analyzing the generalizability of deep learning; (2) stochastic differential equations and their associated dynamical systems for modelling stochastic gradient descent and its variants, which characterize the optimization and generalization of deep learning, partially inspired by Bayesian inference; (3) the geometrical structures of the loss landscape that drive the trajectories of these dynamical systems; (4) the roles of over-parameterization of deep neural networks from both positive and negative perspectives; (5) theoretical foundations of several special structures in network architectures; and (6) the increasing concerns about ethics and security and their relationship with generalizability.