The continuous-time Markov chain (CTMC) is the mathematical workhorse of evolutionary biology. Learning CTMC model parameters using modern, gradient-based methods requires the derivative of the matrix exponential evaluated at the CTMC's infinitesimal generator (rate) matrix. Motivated by the derivative's extreme computational complexity as a function of state space cardinality, recent work demonstrates the surprising effectiveness of a naive, first-order approximation for a host of problems in computational biology. In response to this empirical success, we obtain rigorous deterministic and probabilistic bounds for the error accrued by the naive approximation and establish a "blessing of dimensionality" result that is universal for a large class of rate matrices with random entries. Finally, we apply the first-order approximation within surrogate-trajectory Hamiltonian Monte Carlo for the analysis of the early spread of SARS-CoV-2 across 44 geographic regions that comprise a state space of unprecedented dimensionality for unstructured (flexible) CTMC models within evolutionary biology.
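As an illustration of the object at the heart of this abstract, the sketch below compares the exact Fréchet derivative of the matrix exponential (via SciPy) with a naive first-order approximation; here we read the "naive" approximation as $e^{Q}E$, which is exact only when $Q$ and the perturbation direction $E$ commute. This reading, and all names in the snippet, are our assumptions, not the paper's exact scheme.

```python
import numpy as np
from scipy.linalg import expm_frechet

# Sketch (our assumption of what "naive first-order approximation" means):
# compare the exact directional (Frechet) derivative of expm at a random
# rate matrix Q with expm(Q) @ E, which is exact when Q and E commute.
rng = np.random.default_rng(0)
n = 44  # e.g., one state per geographic region

# Random rate matrix: non-negative off-diagonal entries, rows summing to zero.
Q = rng.gamma(1.0, 1.0, size=(n, n))
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))

E = np.zeros((n, n))
E[0, 1] = 1.0  # perturb a single transition rate

P, dP_exact = expm_frechet(Q, E)  # transition matrix and exact derivative
dP_naive = P @ E                  # naive first-order approximation

print("relative error:",
      np.linalg.norm(dP_naive - dP_exact) / np.linalg.norm(dP_exact))
```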
State-of-the-art methods for Bayesian inference in state-space models are (a) conditional sequential Monte Carlo (CSMC) algorithms and (b) sophisticated 'classical' MCMC algorithms like MALA or mGRAD from Titsias and Papaspiliopoulos (2018, arXiv:1610.09641v3 [stat.ML]). The former propose $N$ particles at each time step to exploit the model's 'decorrelation-over-time' property and thus scale favourably with the time horizon, $T$, but break down if the dimension of the latent states, $D$, is large. The latter leverage gradient- and prior-informed local proposals to scale favourably with $D$ but exhibit sub-optimal scalability with $T$ due to a lack of model-structure exploitation. We introduce methods which combine the strengths of both approaches. The first, Particle-MALA, spreads $N$ particles locally around the current state using gradient information, thus extending MALA to $T > 1$ time steps and $N > 1$ proposals. The second, Particle-mGRAD, additionally incorporates (conditionally) Gaussian prior dynamics into the proposal, thus extending the mGRAD algorithm to $T > 1$ time steps and $N > 1$ proposals. We prove that Particle-mGRAD interpolates between CSMC and Particle-MALA, resolving the 'tuning problem' of choosing between CSMC (superior for highly informative prior dynamics) and Particle-MALA (superior for weakly informative prior dynamics). We similarly extend other 'classical' MCMC approaches like auxiliary MALA, aGRAD, and preconditioned Crank-Nicolson-Langevin (PCNL) to $T > 1$ time steps and $N > 1$ proposals. In experiments, for both highly and weakly informative prior dynamics, our methods substantially improve upon both CSMC and sophisticated 'classical' MCMC approaches.
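For readers unfamiliar with the building block being generalized, here is a minimal vanilla MALA step (the $T = 1$, $N = 1$ case) on a toy Gaussian target; the target, step size, and names are illustrative, not from the paper.

```python
import numpy as np

# Vanilla MALA (the T = 1, N = 1 building block that Particle-MALA extends),
# on a toy standard-Gaussian target.
rng = np.random.default_rng(1)
D = 10
log_pi = lambda x: -0.5 * x @ x   # unnormalized log-density
grad_log_pi = lambda x: -x

def mala_step(x, step):
    mean = x + 0.5 * step * grad_log_pi(x)           # gradient-informed drift
    prop = mean + np.sqrt(step) * rng.standard_normal(D)
    fwd = -np.sum((prop - mean) ** 2) / (2.0 * step)         # log q(prop|x)
    back = prop + 0.5 * step * grad_log_pi(prop)
    bwd = -np.sum((x - back) ** 2) / (2.0 * step)            # log q(x|prop)
    log_alpha = log_pi(prop) + bwd - log_pi(x) - fwd         # MH acceptance
    return prop if np.log(rng.uniform()) < log_alpha else x

x = np.zeros(D)
for _ in range(1000):
    x = mala_step(x, step=0.5)
```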
In arXiv:2305.03945 [math.NA], a first-order optimization algorithm was introduced to solve time-implicit schemes of reaction-diffusion equations. In this work, we conduct theoretical studies of this first-order algorithm equipped with a quadratic regularization term. We provide sufficient conditions under which the proposed algorithm, as well as its time-continuous limit, converges exponentially fast to the desired time-implicit numerical solution. We show both theoretically and numerically that the convergence rate is independent of the grid size, which makes our method suitable for large-scale problems. The efficiency of our algorithm is verified via a series of numerical examples conducted on various types of reaction-diffusion equations. The choice of optimal hyperparameters, as well as comparisons with some classical root-finding algorithms, is also discussed in the numerical section.
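As a rough illustration of the idea (not the paper's actual objective or scheme), one implicit-Euler step of a reaction-diffusion equation can be recast as minimizing a regularized residual and attacked with a first-order method:

```python
import numpy as np

# Sketch under our own assumptions: one implicit-Euler step of
# u_t = u_xx + u(1 - u) (Fisher-KPP), recast as
#   min_u 0.5*||F(u)||^2 + 0.5*eps*||u - u_prev||^2,
# and solved by plain gradient descent. The paper's objective,
# regularization, and algorithm are specified there.
n, dt, eps = 50, 1e-3, 1e-8
h = 1.0 / (n + 1)
L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2  # 1D Dirichlet Laplacian

x = np.linspace(h, 1.0 - h, n)
u_prev = np.exp(-100.0 * (x - 0.5) ** 2)    # previous time level

def F(u):  # implicit-Euler residual
    return u - u_prev - dt * (L @ u + u * (1.0 - u))

def grad(u):  # gradient of the regularized least-squares objective
    J = np.eye(n) - dt * (L + np.diag(1.0 - 2.0 * u))  # Jacobian of F
    return J.T @ F(u) + eps * (u - u_prev)

u = u_prev.copy()
for _ in range(2000):
    u -= 1e-2 * grad(u)
print("residual norm:", np.linalg.norm(F(u)))
```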
We explore three applications of Min-Max-Jump distance (MMJ distance). MMJ-based K-means revises K-means with MMJ distance, and MMJ-based Silhouette coefficient revises the Silhouette coefficient with MMJ distance. We also test the Clustering with Neural Network and Index (CNNI) model with the MMJ-based Silhouette coefficient. In the third application, we test using MMJ distance to predict the labels of new points after a clustering analysis of the data. Results show that MMJ distance achieves good performance in all three proposed applications. In addition, we devise several algorithms for calculating or estimating the distance.
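The abstract does not restate the definition, but if MMJ distance is read as the minimax path distance (minimize, over all paths between two points, the largest single jump along the path), an all-pairs version can be sketched with a Floyd-Warshall-style recursion; the cubic loop below is for clarity only, and the names are ours.

```python
import numpy as np

# Minimax-path reading of Min-Max-Jump distance (our assumption):
# d[i, j] = min over paths i -> j of the maximum edge length on the path,
# computed here with a Floyd-Warshall-style update on Euclidean distances.
def minmax_jump(points):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for k in range(len(points)):
        d = np.minimum(d, np.maximum(d[:, k:k + 1], d[k:k + 1, :]))
    return d

pts = np.random.default_rng(0).random((100, 2))
D = minmax_jump(pts)  # symmetric, zero diagonal, <= Euclidean distance
```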
We propose a two-step Newton method for refining an approximation of a singular zero whose deflation process terminates after one step, also known as a deflation-one singularity. Given an isolated singular zero of a square analytic system, our algorithm exploits an invertible linear operator obtained by combining the Jacobian and a projection of the Hessian in the direction of the kernel of the Jacobian. We prove the quadratic convergence of the two-step Newton method when it is applied to an approximation of a deflation-one singular zero. Moreover, the algorithm requires smaller matrices than existing methods, making it more efficient. We present examples and experiments that demonstrate the efficiency of the method.
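For context (this is not the paper's algorithm), the snippet below shows why singular zeros need special treatment: plain Newton's method degrades to linear convergence when the Jacobian vanishes at the zero.

```python
# Plain Newton on f(x) = x^2, whose zero x = 0 is singular (f'(0) = 0):
# each iterate is exactly halved, i.e., convergence is only linear.
# Specialized methods like the paper's two-step Newton restore fast
# convergence in such situations.
f = lambda x: x * x
df = lambda x: 2.0 * x

x = 1.0
for k in range(8):
    x -= f(x) / df(x)
    print(k, x)  # 0.5, 0.25, 0.125, ...
```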
We present a novel combination of dynamic embedded topic models and change-point detection to explore diachronic change of lexical semantic modality in classical and early Christian Latin. We demonstrate several methods for finding and characterizing patterns in the output and for relating them to traditional scholarship in Comparative Literature and Classics. This simple approach to unsupervised models of semantic change can be applied to any suitable corpus, and we conclude with future directions and refinements aimed at allowing noisier, less-curated materials to meet that threshold.
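A minimal sketch of the change-point half of the pipeline, using the off-the-shelf ruptures package on a synthetic per-topic prevalence series; the paper's actual detector and data representation may differ.

```python
import numpy as np
import ruptures as rpt

# Synthetic stand-in for a topic's prevalence over time-ordered Latin texts:
# a mean shift at index 60 mimics a diachronic semantic change.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.2, 0.02, 60),
                         rng.normal(0.5, 0.02, 40)])

algo = rpt.Pelt(model="rbf").fit(series.reshape(-1, 1))
print(algo.predict(pen=5))  # detected change points, e.g. [60, 100]
```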
The celebrated Kleene fixed point theorem is crucial in the mathematical modelling of recursive specifications in Denotational Semantics. In this paper we discuss whether the hypotheses of the aforementioned result can be weakened. We answer this question in the affirmative: we characterize the properties that a self-mapping must satisfy in order to guarantee that its set of fixed points is non-empty when no notion of completeness is assumed for the partially ordered set. Moreover, the case in which the partially ordered set arises from a quasi-metric space is treated in depth. Finally, an application of the exposed theory is presented: a mathematical method for discussing the asymptotic complexity of those algorithms whose running time satisfies a recurrence equation. This new method retrieves the fixed-point-based methods that appear in the literature for the asymptotic complexity analysis of algorithms, but improves upon them by imposing fewer requirements and by allowing upper and lower asymptotic bounds for the running time to be stated simultaneously.
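To fix intuition for the classical setting being weakened, here is Kleene iteration in miniature: on a finite lattice, the least fixed point of a monotone map is reached by iterating from the bottom element (a toy example of ours, not the paper's weakened setting).

```python
# Kleene-style iteration: the least fixed point of a monotone map on a
# finite powerset lattice is the limit of iterates from the bottom element.
# Here f(S) = {source} U successors(S) has the reachable set as its lfp.
def lfp(f, bottom):
    x = bottom
    while f(x) != x:
        x = f(x)
    return x

edges = {0: [1], 1: [2], 2: [0, 3], 3: []}
f = lambda s: frozenset({0}) | {w for v in s for w in edges[v]}
print(sorted(lfp(f, frozenset())))  # [0, 1, 2, 3]
```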
Right-linear (or left-linear) grammars are a well-known class of context-free grammars computing just the regular languages. They may naturally be written as expressions with (least) fixed points but with products restricted to letters as left arguments, giving an alternative to the syntax of regular expressions. In this work, we investigate the resulting logical theory of this syntax. Namely, we propose a theory of right-linear algebras (RLA) over this syntax and a cyclic proof system CRLA for reasoning about them. We show that CRLA is sound and complete for the intended model of regular languages. From here we recover the same completeness result for RLA by extracting inductive invariants from cyclic proofs, rendering the model of regular languages the free right-linear algebra. Finally, we extend the system CRLA with greatest fixed points, obtaining nuCRLA, naturally modelled by languages of omega-words thanks to right-linearity. We show an analogous soundness and completeness result for (the guarded fragment of) nuCRLA with respect to the model of omega-regular languages, employing game-theoretic techniques.
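A concrete instance of the syntax (our illustration, not taken from the abstract): the right-linear grammar with a single nonterminal X and productions X → aX | b becomes a least-fixed-point expression whose solution is the regular language $a^*b$.

```latex
% Right-linear grammar  X -> aX | b  as a fixed-point expression
% (products restricted to letters as left arguments):
X \;=\; \mu X.\,(a \cdot X + b)
% Unfolding: X = aX + b = a(aX + b) + b = aaX + ab + b = ...
% so the least solution over languages is a^{*}b.
```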
In this note we consider the approximation of the Greeks Delta and Gamma of American-style options through the numerical solution of time-dependent partial differential complementarity problems (PDCPs). This approach is very attractive as it can yield accurate approximations to these Greeks at essentially no additional computational cost during the numerical solution of the PDCP for the pertinent option value function. For the temporal discretization, the Crank-Nicolson method is arguably the most popular method in computational finance. It is well known, however, that this method can exhibit undesirable convergence behaviour in the approximation of the Greeks Delta and Gamma for American-style options, even when backward Euler damping (Rannacher smoothing) is employed. In this note we study for the temporal discretization an interesting family of diagonally implicit Runge-Kutta (DIRK) methods together with the two-stage Lobatto IIIC method. Through extensive numerical experiments for one- and two-asset American-style options, it is shown that these methods can yield a regular second-order convergence behaviour for the option value as well as for the Greeks Delta and Gamma. A mutual comparison reveals that the DIRK method with suitably chosen parameter $\theta$ is preferable.
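For a flavour of the method class (the paper's $\theta$-parameterized DIRK family and the Lobatto IIIC method are defined there), here is a classical two-stage SDIRK method with $\gamma = 1 - 1/\sqrt{2}$ on a linear test problem, where the expected second-order convergence is visible; all choices below are illustrative.

```python
import numpy as np

# Two-stage singly diagonally implicit Runge-Kutta (SDIRK) step for u' = f(u),
# Butcher tableau (gamma, 0; 1-gamma, gamma), weights b = (1-gamma, gamma).
# With gamma = 1 - 1/sqrt(2) this is a classical second-order, L-stable method.
gamma = 1.0 - 1.0 / np.sqrt(2.0)

def sdirk2_step(u, dt, lam):
    # For the linear test problem f(u) = lam*u the stage equations are scalar
    # linear solves; PDE problems require (sparse) linear system solves.
    k1 = lam * u / (1.0 - dt * gamma * lam)
    k2 = lam * (u + dt * (1.0 - gamma) * k1) / (1.0 - dt * gamma * lam)
    return u + dt * ((1.0 - gamma) * k1 + gamma * k2)

# Second-order convergence check on u' = -u, u(0) = 1, t in [0, 1]:
# the error should shrink by a factor of about 4 per halving of dt.
for n in (10, 20, 40, 80):
    dt, u = 1.0 / n, 1.0
    for _ in range(n):
        u = sdirk2_step(u, dt, lam=-1.0)
    print(n, abs(u - np.exp(-1.0)))
```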
Remotely sensed data are dominated by mixed Land Use and Land Cover (LULC) types. Spectral unmixing (SU) is a key technique that disentangles mixed pixels into constituent LULC types and their abundance fractions. While existing studies on Deep Learning (DL) for SU typically focus on single time-step hyperspectral (HS) or multispectral (MS) data, our work pioneers SU using MODIS MS time series, addressing missing data with end-to-end DL models. Our approach enhances a Long Short-Term Memory (LSTM)-based model by incorporating geographic, topographic (geo-topographic), and climatic ancillary information. Notably, our method eliminates the need for explicit endmember extraction, instead learning the input-output relationship between mixed spectra and LULC abundances through supervised learning. Experimental results demonstrate that integrating spectral-temporal input data with geo-topographic and climatic information significantly improves the estimation of LULC abundances in mixed pixels. To facilitate this study, we curated a novel labeled dataset for Andalusia (Spain) with monthly MODIS multispectral time series at 460 m resolution for 2013. Named Andalusia MultiSpectral MultiTemporal Unmixing (Andalusia-MSMTU), this dataset provides pixel-level annotations of LULC abundances along with ancillary information. The dataset (https://zenodo.org/records/7752348) and code (https://github.com/jrodriguezortega/MSMTU) are publicly available.
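A schematic of the kind of architecture described (layer sizes, band counts, and names are our assumptions, not the paper's exact model): an LSTM summarizes the monthly spectral sequence, ancillary geo-topographic/climatic features are concatenated, and a softmax head outputs abundance fractions.

```python
import torch
import torch.nn as nn

# Illustrative architecture only: an LSTM encodes a monthly multispectral time
# series; ancillary features are concatenated before a softmax head producing
# LULC abundance fractions (non-negative and summing to one per pixel).
class UnmixingLSTM(nn.Module):
    def __init__(self, n_bands=7, n_aux=5, n_classes=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden + n_aux, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, spectra, aux):    # spectra: (B, 12 months, n_bands)
        _, (h, _) = self.lstm(spectra)  # h: (num_layers, B, hidden)
        z = torch.cat([h[-1], aux], dim=1)
        return torch.softmax(self.head(z), dim=1)  # abundances sum to 1

model = UnmixingLSTM()
abund = model(torch.randn(4, 12, 7), torch.randn(4, 5))  # toy batch
```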
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI-generated explanations. This lack of theory means that the validation of XAI must be done empirically, on a case-by-case basis, which prevents systematic theory-building in XAI. We propose a psychological theory of how humans draw conclusions from saliency maps, the most common form of XAI explanation, which for the first time allows for precise prediction of explainee inference conditioned on explanation. Our theory posits that, absent an explanation, humans expect the AI to make decisions similar to their own, and that they interpret an explanation by comparing it to the explanations they themselves would give. Comparison is formalized via Shepard's universal law of generalization in a similarity space, a classic theory from cognitive science. A pre-registered user study on AI image classifications with saliency map explanations demonstrates that our theory quantitatively matches participants' predictions of the AI.
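Shepard's law states that perceived similarity decays exponentially with distance in psychological similarity space. A toy sketch of scoring an AI's saliency map against the explanation a human would give (the representations and the distance are illustrative choices of ours, not the paper's operationalization):

```python
import numpy as np

# Shepard's universal law of generalization: similarity = exp(-distance) in
# psychological space. Here the "space" is a flattened saliency map and the
# distance is L1; both are stand-in choices for illustration.
def shepard_similarity(map_ai, map_human, sensitivity=1.0):
    d = np.linalg.norm(map_ai.ravel() - map_human.ravel(), ord=1)
    return np.exp(-sensitivity * d)

ai, human = np.random.rand(8, 8), np.random.rand(8, 8)
print(shepard_similarity(ai / ai.sum(), human / human.sum()))
```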