We propose a new numerical method for $\alpha$-dissipative solutions of the Hunter-Saxton equation, where $\alpha$ belongs to $W^{1, \infty}(\mathbb{R}, [0, 1))$. The method combines a projection operator with a generalized method of characteristics and an iteration scheme, which is based on enforcing minimal time steps whenever breaking times cluster. Numerical examples illustrate that these minimal time steps increase the efficiency of the algorithm substantially. Moreover, convergence of the wave profile is shown in $C([0, T], L^{\infty}(\mathbb{R}))$ for any finite $T \geq 0$.
We consider estimators obtained by iterates of the conjugate gradient (CG) algorithm applied to the normal equation of prototypical statistical inverse problems. Stopping the CG algorithm early induces regularisation, and optimal convergence rates of prediction and reconstruction error are established in wide generality for an ideal oracle stopping time. Based on this insight, a fully data-driven early stopping rule $\tau$ is constructed, which also attains optimal rates, provided the error in estimating the noise level is not dominant. The error analysis of CG under statistical noise is subtle due to its nonlinear dependence on the observations. We provide an explicit error decomposition and identify two terms in the prediction error, which share important properties of classical bias and variance terms. Together with a continuous interpolation between CG iterates, this paves the way for a comprehensive error analysis of early stopping. In particular, a general oracle-type inequality is proved for the prediction error at $\tau$. For bounding the reconstruction error, a more refined probabilistic analysis, based on concentration of self-normalised Gaussian processes, is developed. The methodology also provides some new insights into early stopping for CG in deterministic inverse problems. A numerical study for standard examples shows good results in practice for early stopping at $\tau$.
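For orientation, the following is a minimal Python/NumPy sketch of CG applied to the normal equation (the CGLS iteration), halted by a simple discrepancy-type rule once the data residual drops below a threshold delta tied to the noise level; the function name, the threshold, and the stopping criterion are illustrative assumptions, and the paper's data-driven rule $\tau$ is more refined than this.
\begin{verbatim}
import numpy as np

def cgls_early_stopping(A, y, delta, max_iter=100):
    """CG on the normal equation A.T @ A x = A.T @ y (CGLS),
    stopped early once ||y - A x_k|| <= delta (discrepancy-type rule)."""
    m, n = A.shape
    x = np.zeros(n)
    r = y.copy()              # data-space residual y - A x
    s = A.T @ r               # residual of the normal equation
    p = s.copy()
    gamma = s @ s
    for k in range(max_iter):
        if np.linalg.norm(r) <= delta:   # early stopping induces regularisation
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x, k
\end{verbatim}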
In a Jacobi--Davidson (JD) type method for singular value decomposition (SVD) problems, called JDSVD, a large symmetric and generally indefinite correction equation is solved iteratively at each outer iteration; these inner iterations dominate the overall efficiency of JDSVD. In this paper, a convergence analysis of the minimal residual (MINRES) method for the correction equation is carried out. Motivated by the results obtained, a new correction equation is derived at each outer iteration that extracts useful information from the current subspaces to construct effective preconditioners, and it is proven to retain the same convergence of the outer iterations of JDSVD. The resulting method is called the inner preconditioned JDSVD (IPJDSVD) method; it is also a new JDSVD method, and any viable preconditioner for the correction equations in JDSVD is straightforwardly applicable to those in IPJDSVD. Convergence results show that MINRES for the new correction equation can converge much faster when there is a cluster of singular values closest to a given target. A new thick-restart IPJDSVD algorithm with deflation and purgation is proposed that simultaneously accelerates the outer and inner convergence of the standard thick-restart JDSVD and computes several singular triplets. Numerical experiments justify the theory, illustrate the considerable superiority of IPJDSVD over JDSVD, and demonstrate that a similar two-stage IPJDSVD algorithm substantially outperforms the state-of-the-art PRIMME\_SVDS software for computing the smallest singular triplets.
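As background only, and not the specific correction equation of JDSVD or IPJDSVD, the sketch below applies SciPy's MINRES to a shifted augmented system $\begin{pmatrix}-\sigma I & A\\ A^{T} & -\sigma I\end{pmatrix}x=b$, the kind of large symmetric and generally indefinite system for which MINRES is a natural inner solver; the matrix sizes, the shift $\sigma$, and the right-hand side are illustrative assumptions.
\begin{verbatim}
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
m, n = 200, 150
A = sp.random(m, n, density=0.05, random_state=0, format="csr")

# Augmented matrix [[0, A], [A^T, 0]]: symmetric, eigenvalues +/- sigma_i(A).
Aug = sp.bmat([[None, A], [A.T, None]], format="csr")

sigma = 0.5                                  # illustrative shift toward a target
K = Aug - sigma * sp.identity(m + n)         # symmetric and generally indefinite

b = rng.standard_normal(m + n)
x, info = minres(K, b, maxiter=500)
print("MINRES exit flag:", info, " residual norm:", np.linalg.norm(K @ x - b))
\end{verbatim}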
Numerical schemes that conserve invariants have demonstrated superior performance in various contexts, and several unified methods have been developed for constructing such schemes. However, the mathematical properties of these schemes remain poorly understood, except in norm-preserving cases. This study introduces a novel analytical framework applicable to general energy-preserving schemes. The proposed framework is applied to Korteweg-de Vries (KdV)-type equations, establishing global existence and convergence estimates for the numerical solutions.
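As a concrete, textbook instance of the class of schemes in question (not necessarily the particular schemes treated in this study), the average vector field method for a Hamiltonian system $\dot u = \mathcal S\,\nabla H(u)$ with skew-symmetric $\mathcal S$ (for semidiscretized KdV-type equations, $\mathcal S$ is a skew-adjoint discretization of $\partial_x$) reads
\[
\frac{u^{n+1}-u^{n}}{\tau} \;=\; \mathcal S\,\overline{\nabla}H(u^{n},u^{n+1}),
\qquad
\overline{\nabla}H(u^{n},u^{n+1}) := \int_{0}^{1}\nabla H\big((1-\xi)u^{n}+\xi u^{n+1}\big)\,d\xi,
\]
and it preserves the energy exactly, since
\[
H(u^{n+1})-H(u^{n}) \;=\; \overline{\nabla}H(u^{n},u^{n+1})\cdot\big(u^{n+1}-u^{n}\big) \;=\; \tau\,\overline{\nabla}H\cdot\mathcal S\,\overline{\nabla}H \;=\; 0
\]
by the skew-symmetry of $\mathcal S$.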
This work presents a numerical analysis of a Discontinuous Galerkin (DG) method for a transformed master equation modeling an open quantum system: a quantum sub-system interacting with a noisy environment. We show that, for the general case of non-harmonic potentials, solving the transformed master equation via DG schemes has a lower computational cost than solving a Wigner-Fokker-Planck model of the same system. We then present the specifics of a DG scheme suited to the system of convection-diffusion equations obtained from our Lindblad master equation in the position basis, which allows us to compute solutions of the transformed system modeling our open quantum system. The benchmark case of a harmonic potential is presented first, for which the numerical results are compared against the analytical steady-state solution. Two non-harmonic cases, the linear and the quartic potentials, are then modeled within our DG framework, and the corresponding numerical results are reported.
The continental plates of Earth are known to drift over geophysical timescales, and their interactions have led to some of the most spectacular geoformations of our planet, while also causing natural disasters such as earthquakes and volcanic activity. Understanding the dynamics of interacting continental plates is thus significant. In this work, we present a fluid mechanical investigation of plate motion, interaction, and dynamics. Through numerical experiments, we examine the coupling between a convective fluid and plates floating on top of it. With physical modeling, we show that the coupling is both mechanical and thermal, leading to the thermal blanket effect: the floating plate is not only transported by the fluid flow beneath it but also prevents heat from leaving the fluid, producing a convective flow that further affects the plate motion. By adding several plates to this coupled fluid-structure interaction, we also investigate how floating plates interact with each other and show that, under proper conditions, small plates can converge into a supercontinent.
We address the regression problem for a general function $f:[-1,1]^d\to \mathbb R$ when the learner selects the training points $\{x_i\}_{i=1}^n$ so as to achieve a uniform error bound across the entire domain. In this setting, known historically as nonparametric regression, we aim to establish a sample complexity bound that depends solely on the function's degree of smoothness. Assuming periodicity at the domain boundaries, we introduce PADUA, an algorithm that, with high probability, provides performance guarantees optimal up to constant or logarithmic factors across all problem parameters. Notably, PADUA is the first parametric algorithm with optimal sample complexity for this setting. Due to this feature, we prove that, unlike the nonparametric state of the art, PADUA enjoys optimal space complexity in the prediction phase. To validate these results, we perform numerical experiments on functions derived from real audio data, where PADUA shows performance comparable to state-of-the-art methods while requiring only a fraction of the computational time.
The regularity of solutions to the stochastic nonlinear wave equation plays a critical role in the accuracy and efficiency of numerical algorithms. Rough or discontinuous initial conditions pose significant challenges, often leading to a loss of accuracy and reduced computational efficiency in existing methods. In this study, we address these challenges by developing a novel and efficient numerical algorithm specifically designed for computing rough solutions of the stochastic nonlinear wave equation, while significantly relaxing the regularity requirements on the initial data. By leveraging the intrinsic structure of the stochastic nonlinear wave equation and employing advanced tools from harmonic analysis, we construct a time discretization method that achieves robust convergence for initial values \((u^{0}, v^{0}) \in H^{\gamma} \times H^{\gamma-1}\) for all \(\gamma > 0\). Notably, our method attains an improved error rate of \(O(\tau^{2\gamma-})\) in one and two dimensions for \(\gamma \in (0, \frac{1}{2}]\), and \(O(\tau^{\max(\gamma, 2\gamma - \frac{1}{2}-)})\) in three dimensions for \(\gamma \in (0, \frac{3}{4}]\), where \(\tau\) denotes the time step size. These convergence rates surpass those of existing numerical methods under the same regularity conditions, underscoring the advantage of our approach. To validate the performance of our method, we present extensive numerical experiments that demonstrate its superior accuracy and computational efficiency compared to state-of-the-art methods. These results highlight the potential of our approach to enable accurate and efficient simulations of stochastic wave phenomena even in the presence of challenging initial conditions.
Many of the primal ingredients of convex optimization extend naturally from Euclidean to Hadamard spaces: nonpositively curved metric spaces like Euclidean, Hilbert, and hyperbolic spaces, metric trees, and more general CAT(0) cubical complexes. Linear structure, however, and the duality theory it supports are absent. Nonetheless, we introduce a new type of subgradient for convex functions on Hadamard spaces, based on Busemann functions. This notion supports a splitting subgradient method with guaranteed complexity bounds. In particular, the algorithm solves $p$-mean problems in general Hadamard spaces: we illustrate by computing medians in BHV tree space.
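To make the $p$-mean objective concrete in the simplest (flat) Hadamard space, the Python/NumPy sketch below runs a plain subgradient method for the median (the $p=1$ case) of a finite point set in $\mathbb{R}^d$; this is an illustrative special case only, whereas the paper's splitting subgradient method uses Busemann-based subgradients and applies in general Hadamard spaces such as BHV tree space. The step-size schedule and tolerance are assumptions.
\begin{verbatim}
import numpy as np

def euclidean_median(points, steps=500):
    """Subgradient method for min_x sum_i ||x - a_i|| in R^d."""
    x = points.mean(axis=0)                     # start at the centroid
    for k in range(1, steps + 1):
        diffs = x - points
        norms = np.linalg.norm(diffs, axis=1)
        mask = norms > 1e-12                    # skip points coinciding with x
        g = (diffs[mask] / norms[mask, None]).sum(axis=0)  # a subgradient
        x = x - (1.0 / np.sqrt(k)) * g / len(points)       # diminishing steps
    return x

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(euclidean_median(points))
\end{verbatim}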
The notion of a non-deterministic logical matrix (where connectives are interpreted as multi-functions) extends the traditional semantics for propositional logics based on logical matrices (where connectives are interpreted as functions). This extension allows for finitely characterizing a much wider class of logics, and has proven decisive in a myriad of recent compositionality results. In this paper we show that the added expressivity brought by non-determinism also has its drawbacks, and in particular that the problem of determining whether two given finite non-deterministic matrices are equivalent, in the sense that they induce the same logic, becomes undecidable. We also discuss some workable sufficient conditions and particular cases, namely regarding rexpansion homomorphisms and bridges to calculi.
We incorporate strong negation in the theory of computable functionals TCF, a common extension of Plotkin's PCF and G\"{o}del's system $\mathbf{T}$, by simultaneously defining the strong negation $A^{\mathbf{N}}$ of a formula $A$ and the strong negation $P^{\mathbf{N}}$ of a predicate $P$ in TCF. As a special case of the latter, we obtain strong negation of inductive and coinductive predicates of TCF. We prove appropriate versions of Ex falso quodlibet and of double negation elimination for strong negation in TCF. We introduce the so-called tight formulas of TCF, i.e., formulas implied by the weak negation of their strong negation, as well as the relative tight formulas. We present various case studies and examples, which reveal the naturality of our definition of strong negation in TCF and justify the use of TCF as a formal system for a large part of Bishop-style constructive mathematics.
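For orientation only: on compound formulas, strong negation is typically given by De Morgan-style clauses in the tradition of Nelson and Rasiowa, for instance
\[
(A\wedge B)^{\mathbf{N}} := A^{\mathbf{N}}\vee B^{\mathbf{N}},\qquad
(A\vee B)^{\mathbf{N}} := A^{\mathbf{N}}\wedge B^{\mathbf{N}},\qquad
(A\to B)^{\mathbf{N}} := A\wedge B^{\mathbf{N}},
\]
\[
(\forall_{x}A)^{\mathbf{N}} := \exists_{x}A^{\mathbf{N}},\qquad
(\exists_{x}A)^{\mathbf{N}} := \forall_{x}A^{\mathbf{N}};
\]
these familiar clauses are shown here only to illustrate the general pattern, while the definition in this paper is tailored to the language of TCF and extends, in particular, to its inductive and coinductive predicates.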