We propose a fast method for computing the eigenvalue decomposition of a dense real normal matrix $A$. The method leverages algorithms known to admit efficient implementations, such as the bidiagonal singular value decomposition and the symmetric eigenvalue decomposition. For symmetric and skew-symmetric matrices, the method reduces to calling the latter, so its advantages lie chiefly with orthogonal matrices and, potentially, with any other normal matrix. The method relies on the real Schur decomposition of the skew-symmetric part of $A$. To obtain the eigenvalue decomposition of the normal matrix $A$, additional steps depending on the distribution of the eigenvalues are required. We provide a complexity analysis of the method and compare its numerical performance with that of existing algorithms. In most cases, the method is as fast as computing the Hessenberg factorization of a dense matrix. Finally, we evaluate the method's accuracy and present experiments on computing the Karcher mean on the special orthogonal group.
We study the strong approximation of the solutions to singular stochastic kinetic equations (also referred to as second-order SDEs) driven by $\alpha$-stable processes, using an Euler-type scheme inspired by [11]. For these equations, the stability index $\alpha$ lies in the range $(1,2)$, and the drift term exhibits anisotropic $\beta$-H\"older continuity with $\beta >1 - \frac{\alpha}{2}$. We establish a convergence rate of $(\frac{1}{2} + \frac{\beta}{\alpha(1+\alpha)} \wedge \frac{1}{2})$, which aligns with the results in [4] concerning first-order SDEs.
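For intuition, an Euler-type discretization of a kinetic SDE $dX_t = V_t\,dt$, $dV_t = b(X_t, V_t)\,dt + dL_t^{\alpha}$ can be sketched as follows. This is an illustrative toy, not the specific scheme of [11]: the drift and all parameters are invented for the example (the exponent $0.7$ does satisfy $\beta > 1 - \alpha/2 = 0.25$ for $\alpha = 1.5$), and the symmetric $\alpha$-stable increments are generated with the Chambers-Mallows-Stuck method.

```python
import numpy as np

rng = np.random.default_rng(1)

def stable_increments(alpha, n, dt, rng):
    """Symmetric alpha-stable increments over time steps dt
    (Chambers-Mallows-Stuck method, beta = 0, alpha != 1)."""
    U = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
    W = rng.exponential(1.0, size=n)
    X = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
         * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))
    return dt ** (1 / alpha) * X        # self-similarity scaling

def euler_kinetic(x0, v0, b, alpha, T, n, rng):
    """Euler scheme for dX = V dt, dV = b(X, V) dt + dL^alpha."""
    dt = T / n
    dL = stable_increments(alpha, n, dt, rng)
    x, v = np.empty(n + 1), np.empty(n + 1)
    x[0], v[0] = x0, v0
    for k in range(n):
        x[k + 1] = x[k] + v[k] * dt
        v[k + 1] = v[k] + b(x[k], v[k]) * dt + dL[k]
    return x, v

# toy Hölder-type drift, a stand-in for the anisotropic beta-Hölder drift
b = lambda x, v: -np.sign(x) * np.abs(x) ** 0.7 - v
x, v = euler_kinetic(1.0, 0.0, b, alpha=1.5, T=1.0, n=1000, rng=rng)
print(x[-1], v[-1])
```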
We consider an unknown multivariate function representing a system, such as a complex numerical simulator, that takes both deterministic and uncertain inputs. Our objective is to estimate the set of deterministic inputs leading to outputs whose probability (with respect to the distribution of the uncertain inputs) of belonging to a given set is less than a given threshold. This problem, which we call Quantile Set Inversion (QSI), occurs for instance in the context of robust (reliability-based) optimization, when looking for the set of solutions that satisfy the constraints with sufficiently large probability. To solve the QSI problem, we propose a Bayesian strategy, based on Gaussian process modeling and the Stepwise Uncertainty Reduction (SUR) principle, for sequentially choosing the points at which the function should be evaluated so as to efficiently approximate the set of interest. We illustrate the performance and interest of the proposed SUR strategy through several numerical experiments.
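The target set can be made concrete with a brute-force Monte Carlo version of the definition (no Gaussian process and no SUR; the one-dimensional "simulator" below is a hypothetical toy): for each deterministic input $x$ on a grid, estimate the probability over the uncertain input $U$, and keep the $x$ whose probability falls below the threshold.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical toy "simulator": deterministic input x, uncertain input u
f = lambda x, u: np.sin(3 * x) + 0.5 * u

U = rng.standard_normal(10_000)          # draws of the uncertain input
xs = np.linspace(0.0, 2.0, 201)          # grid of deterministic inputs
# P(f(x, U) in C) with C = (1, inf), estimated by Monte Carlo at each x
probs = np.array([np.mean(f(x, U) > 1.0) for x in xs])

gamma = 0.05                             # probability threshold
qsi_set = xs[probs <= gamma]             # estimated QSI set
print(qsi_set.min(), qsi_set.max())
```

The SUR strategy replaces this exhaustive grid-plus-Monte-Carlo evaluation, which is infeasible for an expensive simulator, by a sequential design of the evaluation points.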
This work is concerned with the computation of the first-order variation for one-dimensional hyperbolic partial differential equations. In the case of shock waves, the main challenge is addressed by developing a numerical method to compute the evolution of the generalized tangent vector introduced by Bressan and Marson (1995). Our basic strategy is to combine conservative numerical schemes with a novel expression of the interface conditions for the tangent vectors along the discontinuity. Based on this, we propose a simple numerical method to compute the tangent vectors for general hyperbolic systems. Numerical results are presented for Burgers' equation and a $2 \times 2$ hyperbolic system with two genuinely nonlinear fields.
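To see why shocks are the difficult case, one can compute the naive alternative to tangent vectors: a divided difference of two Godunov solves of Burgers' equation (an illustrative setup, not the method of this work). Away from the shock this approximates the first-order variation well; at the shock the difference concentrates on a few cells, which is the situation the generalized tangent vector formalism is designed to handle.

```python
import numpy as np

def burgers_godunov(u0, dx, dt, steps):
    """Godunov scheme for u_t + (u^2/2)_x = 0, periodic boundary."""
    u = u0.copy()
    f = lambda a: 0.5 * a * a
    for _ in range(steps):
        ul, ur = u, np.roll(u, -1)
        F = np.where(ul > ur,
                     np.maximum(f(ul), f(ur)),                # shock case
                     np.where(ul > 0, f(ul),
                              np.where(ur < 0, f(ur), 0.0)))  # rarefaction
        u = u - dt / dx * (F - np.roll(F, 1))
    return u

n = 400
dx = 2 * np.pi / n
x = dx * np.arange(n)
dt = 0.4 * dx                      # CFL-stable for |u| <= 1

u0 = np.sin(x)                     # develops a shock at t = 1
eps = 1e-6
du0 = np.cos(x)                    # direction of the initial perturbation
u_base = burgers_godunov(u0, dx, dt, 200)             # past shock formation
u_pert = burgers_godunov(u0 + eps * du0, dx, dt, 200)
tangent_fd = (u_pert - u_base) / eps                  # naive sensitivity
print(np.abs(tangent_fd).max())    # the divided difference concentrates at the shock
```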
This paper investigates the pathwise uniform convergence in probability of fully discrete finite-element approximations for the two-dimensional stochastic Navier-Stokes equations with multiplicative noise, subject to no-slip boundary conditions. Assuming Lipschitz-continuous diffusion coefficients and under mild conditions on the initial data, we establish that the full discretization achieves linear convergence in space and nearly half-order convergence in time.
The accurate computational study of wavepacket nuclear dynamics is considered to be a classically intractable problem, particularly with increasing dimensions. Here we present two algorithms that, in conjunction with other methods developed by us, will form the basis for performing quantum nuclear dynamics in arbitrary dimensions. For the first algorithm, we present a direct map between the Born-Oppenheimer Hamiltonian describing the wavepacket time evolution and the control parameters of a spin-lattice Hamiltonian that describes the dynamics of qubit states in an ion-trap quantum computer. This map is exact for three qubits, and when implemented, the dynamics of the spin states emulate those of the nuclear wavepacket. However, the map becomes approximate as the number of qubits grows. In a second algorithm, we present a general quantum circuit decomposition formalism for such problems using a method called the Quantum Shannon Decomposition. This algorithm is more robust and is exact for any number of qubits, at the cost of increased circuit complexity. The resulting circuit is implemented on IBM's quantum simulator (QASM) for 3-7 qubits. In both cases the wavepacket dynamics is found to be in good agreement with the classical result, and the corresponding vibrational frequencies obtained from the time evolution of the wavepacket density agree to within a few tenths of a wavenumber.
The construction of loss functions presents a major challenge in data-driven modeling involving weak-form operators in PDEs and gradient flows, particularly due to the need to select test functions appropriately. We address this challenge by introducing self-test loss functions, which employ test functions that depend on the unknown parameters, specifically for cases where the operator depends linearly on the unknowns. The proposed self-test loss function conserves energy for gradient flows and coincides with the expected log-likelihood ratio for stochastic differential equations. Importantly, it is quadratic, facilitating theoretical analysis of identifiability and well-posedness of the inverse problem, while also leading to efficient parametric or nonparametric regression algorithms. It is computationally simple, requiring only low-order derivatives or even being entirely derivative-free, and numerical experiments demonstrate its robustness against noisy and discrete data.
We show that one-way functions exist if and only if there exists an efficient distribution relative to which almost-optimal compression is hard on average. The result is obtained by combining a theorem of Ilango, Ren, and Santhanam with one of Bauwens and Zimand.
We carry out a stability and convergence analysis for the fully discrete scheme obtained by combining a finite or virtual element spatial discretization with upwind-discontinuous Galerkin time-stepping applied to the time-dependent advection-diffusion equation. A space-time streamline-upwind Petrov-Galerkin term is used to stabilize the method. More precisely, we show that the method is inf-sup stable with a constant independent of the diffusion coefficient, which ensures the robustness of the method in both the convection- and the diffusion-dominated regimes. Moreover, we prove optimal convergence rates in both regimes for the error in the energy norm. An important feature of the presented analysis is control of the full $L^2(0,T;L^2(\Omega))$ norm without the need to introduce an artificial reaction term in the model. We finally present some numerical experiments in $(3+1)$ dimensions that validate our theoretical results.
We develop and analyze stochastic inexact Gauss-Newton methods for nonlinear least-squares problems and inexact Newton methods for nonlinear systems of equations. Random models are formed using suitable sampling strategies for the matrices involved in the deterministic models. The analysis of the expected number of iterations needed in the worst case to achieve a desired level of accuracy in the first-order optimality condition provides guidelines for applying sampling and enforcing, with fixed probability, a suitable accuracy in the random approximations. Results of the numerical validation of the algorithms are presented.
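The flavor of the approach can be conveyed by a minimal sketch of an inexact Gauss-Newton iteration with row sampling on a toy exponential fit. All problem data below are invented for illustration, and the sketch omits what the paper actually analyzes: the adaptive control of the sampling accuracy enforced with fixed probability.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy data: y ~ exp(a*t) + b with a = 0.8, b = 0.3 (invented for the demo)
m = 500
t = np.linspace(0.0, 1.0, m)
y = np.exp(0.8 * t) + 0.3 + 0.01 * rng.standard_normal(m)

def residual(p, idx):
    a, b = p
    return np.exp(a * t[idx]) + b - y[idx]

def jacobian(p, idx):
    a, _ = p
    return np.column_stack([t[idx] * np.exp(a * t[idx]),  # d/da
                            np.ones(idx.size)])           # d/db

p = np.array([0.0, 0.0])
batch = 64
for _ in range(50):
    idx = rng.choice(m, size=batch, replace=False)  # random row sampling
    J, r = jacobian(p, idx), residual(p, idx)
    # inexact Gauss-Newton step built from the sampled model
    p = p + np.linalg.lstsq(J, -r, rcond=None)[0]

print(p)  # close to (0.8, 0.3)
```

Each iteration solves the least-squares subproblem only on a random subset of the residuals, so the per-iteration cost scales with the batch size rather than with the full data set.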
Monitoring wastewater concentrations of SARS-CoV-2 yields a low-cost, noninvasive method for tracking disease prevalence and provides early warning signs of upcoming outbreaks in the serviced communities. There is tremendous clinical and public health interest in understanding the exact dynamics between wastewater viral loads and infection rates in the population. As both data sources may contain substantial noise and missingness, in addition to spatial and temporal dependencies, properly modeling this relationship must address these numerous complexities simultaneously while providing interpretable and clear insights. We propose a novel Bayesian functional concurrent regression model that accounts for both spatial and temporal correlations while estimating the dynamic effects between wastewater concentrations and positivity rates over time. We explicitly model the time lag between the two series and provide full posterior inference on the possible delay between spikes in wastewater concentrations and subsequent outbreaks. We estimate a likely time lag of 5 to 11 days between spikes in wastewater levels and reported clinical positivity rates. Additionally, we find that the strength of the association between wastewater concentration levels and positivity rates is itself dynamic, fluctuating between outbreak and non-outbreak periods.