We establish a general convergence theory of the Rayleigh--Ritz method and the refined Rayleigh--Ritz method for computing a simple eigenpair $(\lambda_{*},x_{*})$ of a given analytic nonlinear eigenvalue problem (NEP). In terms of the deviation $\varepsilon$ of $x_{*}$ from a given subspace $\mathcal{W}$, we establish a priori convergence results on the Ritz value, the Ritz vector and the refined Ritz vector, and present sufficient conditions for their convergence. The results show that, as $\varepsilon\rightarrow 0$, there is a Ritz value that unconditionally converges to $\lambda_*$, as does the corresponding refined Ritz vector, whereas the Ritz vector may fail to converge and may not even be unique. We also present an error bound for the approximate eigenvector in terms of the computable residual norm of a given approximate eigenpair, and give lower and upper bounds for the errors of the refined Ritz vector and the Ritz vector as well as for the corresponding residual norms. These results nontrivially extend some convergence results on these two methods for the linear eigenvalue problem to the NEP. Examples are constructed to illustrate some of the results.
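For orientation, the projection step in question can be sketched as follows (a standard formulation; the matrix $W$, whose orthonormal columns span $\mathcal{W}$, and the function $T$ defining the NEP $T(\lambda)x=0$ are our notation, not the abstract's). A Ritz pair $(\mu, Wy)$ solves the projected problem
\[
W^{H} T(\mu)\, W y = 0, \qquad y \neq 0,
\]
while, for a fixed Ritz value $\mu$, the refined Ritz vector is $\tilde{x} = W\tilde{y}$ with
\[
\tilde{y} = \operatorname*{arg\,min}_{\|y\|_2 = 1} \| T(\mu)\, W y \|_2 .
\]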
We derive and analyze a symmetric interior penalty discontinuous Galerkin scheme for the approximation of the second-order form of the radiative transfer equation in slab geometry. Using appropriate trace lemmas, the analysis can be carried out as for more standard elliptic problems. Numerical examples confirm the accuracy and stability of the method for different polynomial degrees. For discretization, we employ quad-tree grids, which allow for local refinement in phase space, and we show by example that adaptive methods can efficiently approximate discontinuous solutions. We investigate the behavior of hierarchical error estimators and of error estimators based on local averaging.
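For context, the symmetric interior penalty bilinear form is the standard one; for a model problem $-\Delta u = f$ it reads (generic notation, not taken from the paper; the slab-geometry transport operator is treated analogously via the trace lemmas mentioned above)
\[
a_h(u,v) = \sum_{K} \int_K \nabla u \cdot \nabla v
- \sum_{F} \int_F \left( \{\!\{\nabla u\}\!\} \cdot n_F \, [\![v]\!] + \{\!\{\nabla v\}\!\} \cdot n_F \, [\![u]\!] \right)
+ \sum_{F} \frac{\sigma}{h_F} \int_F [\![u]\!]\, [\![v]\!],
\]
with elements $K$, faces $F$, averages $\{\!\{\cdot\}\!\}$, jumps $[\![\cdot]\!]$ and penalty parameter $\sigma > 0$.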
The arrival of AI techniques in computations, with the potential for hallucinations and non-robustness, has made the trustworthiness of algorithms a focal point. However, the trustworthiness of many classical approaches is not well understood. This is the case for feature selection, a classical problem in the sciences, statistics, machine learning, etc. Here, the LASSO optimisation problem is standard. Despite its widespread use, it has not been established when the output of algorithms attempting to compute support sets of minimisers of LASSO in order to do feature selection can be trusted. In this paper we establish that no (randomised) algorithm that works on all inputs can determine the correct support sets (with probability $> 1/2$) of minimisers of LASSO when reading approximate input, regardless of precision and computing power. However, we define a LASSO condition number and design an efficient algorithm for computing these support sets provided the input data is well-posed (has finite condition number), in time polynomial in the dimensions and the logarithm of the condition number. For ill-posed inputs the algorithm runs forever; hence it never produces a wrong answer. Furthermore, the algorithm computes an upper bound for the condition number when this is finite. Finally, for any algorithm defined on an open set containing a point with infinite condition number, there is an input for which the algorithm will either run forever or produce a wrong answer. Our impossibility results stem from generalised hardness of approximation -- within the Solvability Complexity Index (SCI) hierarchy framework -- which generalises the classical phenomenon of hardness of approximation.
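For concreteness, the optimisation problem and the support sets in question are the standard ones (up to the usual variants of the data-fidelity term):
\[
\min_{x \in \mathbb{R}^{N}} \; \|Ax - y\|_2^2 + \lambda \|x\|_1,
\qquad
\mathrm{supp}(x) = \{\, j : x_j \neq 0 \,\},
\]
and feature selection asks for $\mathrm{supp}(\hat{x})$ of a minimiser $\hat{x}$, computed from approximate representations of $(A, y)$.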
The Neumann--Neumann method is a commonly employed domain decomposition method for linear elliptic equations. However, the method exhibits slow convergence when applied to semilinear equations and does not seem to converge at all for certain quasilinear equations. We therefore propose two modified Neumann--Neumann methods that have better convergence properties and require less computation. We provide numerical results that show the advantages of these methods when applied to both semilinear and quasilinear equations. We also prove linear convergence with mesh-independent error reduction under certain assumptions on the equation. The analysis is carried out on general Lipschitz domains and relies on the theory of nonlinear Steklov--Poincar\'e operators.
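For reference, one classical form of the (linear) Neumann--Neumann iteration for two subdomains $\Omega_1, \Omega_2$ with interface $\Gamma$ reads (a textbook template, not the modified methods proposed in the paper): given an interface value $\eta^k$,
\begin{align*}
&-\Delta u_i^k = f \ \text{in } \Omega_i, && u_i^k = \eta^k \ \text{on } \Gamma,\\
&-\Delta \psi_i^k = 0 \ \text{in } \Omega_i, && \partial_{n_i} \psi_i^k = \partial_{n_1} u_1^k + \partial_{n_2} u_2^k \ \text{on } \Gamma,\\
&\eta^{k+1} = \eta^k - \theta \left( \psi_1^k + \psi_2^k \right)\!\big|_{\Gamma}, &&
\end{align*}
with homogeneous conditions on $\partial\Omega$, outward normals $n_i$ and a relaxation parameter $\theta > 0$; in the nonlinear setting the flux maps are replaced by nonlinear Steklov--Poincar\'e operators.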
This work puts forth low-complexity Riemannian subspace descent algorithms for the minimization of functions over the symmetric positive definite (SPD) manifold. Unlike existing Riemannian gradient descent variants, the proposed approach utilizes carefully chosen subspaces that allow the update to be written as a product of the Cholesky factor of the iterate and a sparse matrix. The resulting updates avoid costly matrix operations such as matrix exponentiation and dense matrix multiplication, which are required by almost all other Riemannian optimization algorithms on the SPD manifold. We further identify a broad class of functions, arising in diverse applications such as kernel matrix learning, covariance estimation of Gaussian distributions, maximum likelihood parameter estimation of elliptically contoured distributions, and parameter estimation in Gaussian mixture models, for which the Riemannian gradients can be calculated efficiently. The proposed uni-directional and multi-directional Riemannian subspace descent variants incur per-iteration complexities of $O(n)$ and $O(n^2)$, respectively, compared to the $O(n^3)$ or higher complexity incurred by all existing Riemannian gradient descent variants. The superior runtime and low per-iteration complexity of the proposed algorithms are also demonstrated via numerical tests on large-scale covariance estimation and matrix square root problems. A MATLAB implementation is publicly available at https://github.com/yogeshd-iitk/subspace_descent_over_SPD_manifold
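For contrast, the standard Riemannian gradient step on the SPD manifold under the affine-invariant metric, which requires the matrix exponential that the proposed updates avoid, is (a textbook formula, with Euclidean gradient $\nabla f$ and step size $\eta$)
\[
X_{k+1} = X_k^{1/2} \exp\!\left( -\eta\, X_k^{1/2}\, \nabla f(X_k)\, X_k^{1/2} \right) X_k^{1/2},
\]
whereas the proposed methods update the Cholesky factor $L_k$ (with $X_k = L_k L_k^{\top}$) by multiplication with a sparse matrix, sidestepping the exponential entirely.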
In this paper we present a mathematical and numerical analysis of an eigenvalue problem associated with the elasticity-Stokes equations in two and three dimensions. The two problems are related through the Herrmann pressure. Employing the Babu\v ska--Brezzi theory, we prove that the resulting continuous and discrete variational formulations are well-posed. In particular, the finite element method is based on general inf-sup stable pairs for the Stokes system, such as the Taylor--Hood finite elements. By using a general approximation theory for compact operators, we obtain optimal order error estimates for the eigenfunctions and a double order for the eigenvalues. Under mild assumptions, these estimates hold with constants independent of the Lam\'e coefficient $\lambda$. In addition, we carry out the reliability and efficiency analysis of a residual-based a posteriori error estimator for the spectral problem. We report a series of numerical tests in order to assess the performance of the method and its behavior when the nearly incompressible case of elasticity is considered.
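Schematically, the "double order" statement is the usual one for spectral approximation of compact operators: writing $\kappa$ for an eigenvalue (our notation, to avoid clashing with the Lam\'e coefficient $\lambda$) and $r$ for the order permitted by the regularity of the eigenfunctions and the polynomial degree, the generic form is
\[
\| u - u_h \| \le C h^{r}, \qquad |\kappa - \kappa_h| \le C h^{2r},
\]
with $C$ independent of $\lambda$ under the mild assumptions mentioned above.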
The aim of this paper is to give a systematic mathematical interpretation of the diffusion problem on which Graph Neural Network (GNN) models are based. The starting point of our approach is a dissipative functional leading to dynamical equations that allow us to study the symmetries of the model. We discuss the conserved charges and provide a charge-preserving numerical method for solving the dynamical equations. As in any dynamical system, in GRAph Neural Diffusion (GRAND) knowledge of the charge values and of their conservation along the evolution flow provides a way to understand how GNNs and other networks work and achieve their learning capabilities.
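For context, the graph neural diffusion model referenced here is commonly written as
\[
\frac{\partial X(t)}{\partial t} = \big( A(X(t)) - I \big)\, X(t), \qquad X(0) = X_{\mathrm{in}},
\]
where $X(t)$ collects the node features and $A(\cdot)$ is a learned attention matrix; a forward Euler step $X_{k+1} = X_k + \tau \big( A(X_k) - I \big) X_k$ is the generic discretization, which a charge-preserving scheme like the one proposed here is designed to improve upon.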
A major technique in learning-augmented online algorithms is combining multiple algorithms or predictors. Since the performance of each predictor may vary over time, it is desirable to use not the single best predictor as a benchmark, but rather a dynamic combination which follows different predictors at different times. We design algorithms that combine predictions and are competitive against such dynamic combinations for a wide class of online problems, namely, metrical task systems. Against the best (in hindsight) unconstrained combination of $\ell$ predictors, we obtain a competitive ratio of $O(\ell^2)$, and show that this is best possible. However, for a benchmark with a slightly constrained number of switches between different predictors, we can get a $(1+\epsilon)$-competitive algorithm. Moreover, our algorithms can be adapted to access predictors in a bandit-like fashion, querying only one predictor at a time. An unexpected implication of one of our lower bounds is a new structural insight about covering formulations for the $k$-server problem.
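Schematically, and in our notation rather than the paper's: if predictor $p_i$ prescribes a state $p_i(t)$ at each time $t$, the dynamic benchmark is
\[
\mathrm{OFF} = \min_{\sigma : \mathbb{N} \to \{1,\dots,\ell\}} \; \mathrm{cost}\big( p_{\sigma(1)}(1),\, p_{\sigma(2)}(2),\, \dots \big),
\]
the cheapest way, in hindsight, of following one predictor at each time step (movement costs between states included when switching), and one seeks $\mathrm{ALG} \le C \cdot \mathrm{OFF} + O(1)$, here with $C = O(\ell^2)$ in the unconstrained case.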
We describe and analyze a quasi-Trefftz DG method for solving boundary value problems for the homogeneous diffusion-advection-reaction equation with piecewise-smooth coefficients. Trefftz schemes are high-order Galerkin methods whose discrete functions are elementwise exact solutions of the underlying PDE. Trefftz basis functions can be computed for many PDEs that are linear, homogeneous and have piecewise-constant coefficients. However, if the equation has varying coefficients, exact solutions are in general unavailable, and the construction of discrete Trefftz spaces is impossible. Quasi-Trefftz methods have been introduced to overcome this limitation, relying on discrete spaces of functions that are elementwise "approximate solutions" of the PDE. A space-time quasi-Trefftz DG method for the acoustic wave equation with smoothly varying coefficients has recently been studied; since it has shown excellent results, we propose a related method that can be applied to second-order elliptic equations. The DG weak formulation is derived using an interior penalty parameter and upwind numerical fluxes. We choose polynomial quasi-Trefftz basis functions, whose coefficients can be computed with a simple algorithm based on the Taylor expansion of the PDE's coefficients. The main advantage of Trefftz and quasi-Trefftz schemes over more classical ones is their higher accuracy for comparable numbers of degrees of freedom. We prove that the dimension of the quasi-Trefftz space is smaller than that of the full polynomial space of the same degree, yet it yields the same optimal convergence rates. The quasi-Trefftz DG method is well-posed, consistent and stable, and we prove its high-order convergence. We present some numerical experiments in two dimensions that show excellent properties in terms of approximation and convergence rate.
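One standard choice of polynomial quasi-Trefftz space, for a second-order operator $\mathcal{L}$ on an element $K$ with centre $x_K$, is (a generic form from the quasi-Trefftz literature; the paper's precise definition may differ in details)
\[
\mathbb{QT}^{p}(K) := \big\{ v \in \mathbb{P}^{p}(K) : D^{\alpha}(\mathcal{L}v)(x_K) = 0 \ \text{ for all } |\alpha| \le p-2 \big\},
\]
i.e. $\mathcal{L}v$ vanishes at $x_K$ together with all its derivatives up to order $p-2$; the coefficients of such $v$ can be computed recursively from the Taylor expansion of the coefficients of $\mathcal{L}$ at $x_K$.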
We present a comprehensive analysis of the implications of artificial latency in the Proposer-Builder Separation framework on the Ethereum network. Focusing on the MEV-Boost auction system, we analyze how strategic latency manipulation affects Maximum Extractable Value yields and network integrity. Our findings reveal both increased profitability for node operators and significant systemic challenges, including heightened network inefficiencies and centralization risks. We empirically validate these insights with a pilot that Chorus One has been operating on Ethereum mainnet. We demonstrate the nuanced effects of latency on bid selection and validator dynamics. Ultimately, this research underscores the need for balanced strategies that optimize Maximum Extractable Value capture while preserving the Ethereum network's decentralization ethos.
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. The lack of theory means that validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparison to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
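The cited law is the classic exponential one: the probability of generalizing from stimulus $x$ to stimulus $y$ decays exponentially with their distance in psychological similarity space,
\[
g(x,y) = e^{-c\, d(x,y)}, \qquad c > 0,
\]
so the theory predicts an explainee's inference from a saliency map by comparing it, under this similarity measure, to the explanation the explainee would themselves have given.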