We propose two approaches, based on Riemannian optimization, for computing a stochastic approximation of the $p$th root of a stochastic matrix $A$. In the first approach, the approximation is sought in the Riemannian manifold of positive stochastic matrices. In the second approach, we introduce the Riemannian manifold of positive stochastic matrices that share the Perron eigenvector with $A$, and we compute the approximation of the $p$th root of $A$ in this manifold. In this way, unlike the available methods based on constrained optimization, $A$ and its approximate $p$th root share the Perron eigenvector. This property is relevant, from a modelling point of view, in the embedding problem for Markov chains. Extensive numerical experiments show that, in the first approach, the Riemannian optimization methods are generally faster and more accurate than the available methods based on constrained optimization. In the second approach, even though the stochastic approximation of the $p$th root is sought in a smaller set, the approximation is generally more accurate than the one obtained by standard constrained optimization.
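The paper's Riemannian algorithms are not reproduced here; as a minimal sketch of the underlying optimization problem, the snippet below approximates a stochastic $p$th root by minimizing $\|X^p - A\|_F^2$ over positive stochastic matrices. The row-wise softmax parametrization is our illustration device (it keeps iterates strictly positive with unit row sums), not the paper's retraction-based machinery.

```python
# Minimal sketch (not the paper's Riemannian algorithm): approximate a
# stochastic p-th root of a stochastic matrix A by minimizing
# ||X^p - A||_F^2.  Row-wise softmax keeps every iterate strictly
# positive with unit row sums, mimicking the positive stochastic manifold.
import numpy as np
from scipy.optimize import minimize

def softmax_rows(Y):
    Z = np.exp(Y - Y.max(axis=1, keepdims=True))
    return Z / Z.sum(axis=1, keepdims=True)

def stochastic_pth_root(A, p, n_restarts=3, seed=0):
    n = A.shape[0]
    rng = np.random.default_rng(seed)

    def objective(y):
        X = softmax_rows(y.reshape(n, n))
        return np.linalg.norm(np.linalg.matrix_power(X, p) - A, "fro") ** 2

    best = None
    for _ in range(n_restarts):
        res = minimize(objective, rng.standard_normal(n * n), method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return softmax_rows(best.x.reshape(n, n)), best.fun

A = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.2, 0.3, 0.5]])
X, err = stochastic_pth_root(A, p=2)
print("residual ||X^2 - A||_F^2 =", err)
```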
We propose a Bernoulli-barycentric rational matrix collocation method for two-dimensional evolutionary partial differential equations (PDEs) with variable coefficients that combines Bernoulli polynomials with barycentric rational interpolation in time and space, respectively. The theoretical accuracy $O\left((2\pi)^{-N}+h_x^{d_x-1}+h_y^{d_y-1}\right)$ of our numerical scheme is proven, where $N$ is the number of basis functions in time, $h_x$ and $h_y$ are the grid sizes in the $x$- and $y$-directions, respectively, and $0\leq d_x\leq \frac{b-a}{h_x},~0\leq d_y\leq\frac{d-c}{h_y}$. For the efficient solution of the relevant linear systems arising from the discretizations, we introduce a class of dimension-expanded preconditioners that take advantage of structural properties of the coefficient matrices, and we present a theoretical analysis of the eigenvalue distributions of the preconditioned matrices. The effectiveness of our proposed method and preconditioners is demonstrated on real-world examples represented by the heat conduction equation, the advection-diffusion equation, the wave equation, and telegraph equations.
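As a hedged sketch of the spatial ingredient only, the snippet below implements classical Floater-Hormann barycentric rational interpolation on equispaced nodes, the family of interpolants whose blending parameter $d$ plays the role of $d_x$, $d_y$ in the bound above; the coupling with Bernoulli polynomials in time is not reproduced.

```python
# Floater-Hormann barycentric rational interpolation on equispaced nodes.
import numpy as np
from math import comb

def fh_weights(n, d):
    # Weights for n+1 equispaced nodes with blending parameter d
    # (global sign is irrelevant in the barycentric formula).
    w = np.zeros(n + 1)
    for k in range(n + 1):
        s = sum(comb(d, k - i) for i in range(max(0, k - d), min(k, n - d) + 1))
        w[k] = (-1) ** k * s
    return w

def bary_eval(x_nodes, f_nodes, w, x):
    diff = x[:, None] - x_nodes[None, :]
    exact = np.isclose(diff, 0.0)
    diff[exact] = 1.0                       # avoid division by zero at nodes
    r = (w * f_nodes / diff).sum(axis=1) / (w / diff).sum(axis=1)
    hit = exact.any(axis=1)
    r[hit] = f_nodes[exact.argmax(axis=1)[hit]]   # exact values at the nodes
    return r

n, d = 20, 3
x_nodes = np.linspace(0.0, 1.0, n + 1)
f = np.sin(2 * np.pi * x_nodes)
xx = np.linspace(0.0, 1.0, 1001)
err = np.abs(bary_eval(x_nodes, f, fh_weights(n, d), xx) - np.sin(2 * np.pi * xx))
print("max interpolation error:", err.max())
```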
This paper presents a closed-form outage analysis of the recently introduced $\alpha$-modification of the shadowed Beaulieu-Xie fading model for wireless communications. For the considered channel, closed-form analytical expressions are derived for the outage probability (including its upper and lower bounds), the raw moments, the amount of fading, and the channel quality estimation indicator. A thorough numerical simulation and analysis demonstrates strong agreement with the derived closed-form solutions and illustrates the relationship between the outage probability and the channel parameters.
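The closed-form expressions themselves are derived in the paper and not reproduced here; as a generic illustration of the central quantity, the outage probability $P_{\mathrm{out}} = \Pr(\mathrm{SNR} < \gamma_{\mathrm{th}})$ can be estimated by Monte Carlo for any fading model. The sketch below uses a Nakagami-$m$ envelope purely as a stand-in, since the $\alpha$-modified shadowed Beaulieu-Xie pdf is not part of this abstract.

```python
# Monte Carlo estimate of outage probability P(SNR < threshold).
# Nakagami-m is a stand-in fading model for illustration only.
import numpy as np
from scipy.stats import nakagami

rng = np.random.default_rng(42)
m, mean_snr_db, thr_db = 2.0, 10.0, 5.0
mean_snr, thr = 10 ** (mean_snr_db / 10), 10 ** (thr_db / 10)

h = nakagami.rvs(m, size=1_000_000, random_state=rng)  # fading envelope
snr = mean_snr * h**2 / (h**2).mean()                  # normalized instantaneous SNR
print("estimated outage probability:", (snr < thr).mean())
```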
In the realm of statistical exploration, the manipulation of pseudo-random values to discern their impact on data distribution presents a compelling avenue of inquiry. This article investigates the question: is it possible to add pseudo-random values without forcing a shift towards a normal distribution? Using Python, the study explores the nuances of pseudo-random value addition, aiming to unravel the interplay between randomness and the resulting statistical characteristics. The Materials and Methods section details the construction of datasets comprising up to 300 billion pseudo-random values, employing three distinct layers of manipulation. The Results section explores the generated datasets visually and quantitatively, emphasizing distribution and standard deviation metrics. The study concludes with reflections on the implications of pseudo-random value manipulation and suggests avenues for future research. In the layered exploration, the first layer introduces subtle normalization with increasing summations, while the second layer enhances normality. The third layer disrupts typical distribution patterns, leaning towards randomness despite pseudo-random value summation. Standard deviation patterns across the layers further illuminate the dynamic interplay between pseudo-random operations and statistical characteristics. While not aiming to disrupt academic norms, this work modestly contributes insights into the complexities of data distribution. Future studies are encouraged to delve deeper into the implications of data manipulation on statistical outcomes, extending the understanding of pseudo-random operations in diverse contexts.
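As a small-scale companion to the first two layers (the sizes and parameters below are our own, not the article's), the snippet shows the central-limit effect the study probes: sums of independent pseudo-random uniforms drift toward normality as the number of summands grows.

```python
# Summing independent pseudo-random uniforms pushes the distribution of the
# sums toward normality (central limit theorem); skewness and excess
# kurtosis approach 0 as the number of summands grows.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
for n_summands in (1, 2, 10, 100):
    sums = rng.random((100_000, n_summands)).sum(axis=1)
    print(f"{n_summands:>4} summands: std={sums.std():.4f}, "
          f"skew={skew(sums):+.4f}, excess kurtosis={kurtosis(sums):+.4f}")
```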
We introduce off-policy distributional Q($\lambda$), a new addition to the family of off-policy distributional evaluation algorithms. Off-policy distributional Q($\lambda$) does not apply importance sampling for off-policy learning, which introduces intriguing interactions with signed measures. Such unique properties distinguish distributional Q($\lambda$) from other existing alternatives such as distributional Retrace. We characterize the algorithmic properties of distributional Q($\lambda$) and validate the theoretical insights with tabular experiments. We show how distributional Q($\lambda$)-C51, a combination of distributional Q($\lambda$) with the C51 agent, exhibits promising results on deep RL benchmarks.
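The distributional machinery of the paper is beyond a short sketch, but the expected-value Q($\lambda$) return it generalizes is easy to state; computed backward over a recorded trajectory, it uses no importance-sampling ratios. The snippet below is our illustration of that classical recursion (in the style of Harutyunyan et al.'s Q($\lambda$)), not the paper's distributional algorithm.

```python
# Expected-value off-policy Q(lambda) return on a recorded trajectory;
# note the absence of importance-sampling ratios.  Illustrative only.
import numpy as np

def q_lambda_returns(rewards, states, actions, Q, pi, gamma=0.99, lam=0.9):
    """G_t = r_t + g*E_pi[Q(s_{t+1},.)] + g*lam*(G_{t+1} - Q(s_{t+1},a_{t+1}))."""
    T = len(rewards)
    G = np.zeros(T)
    g_next = 0.0
    for t in reversed(range(T)):
        if t == T - 1:
            G[t] = rewards[t]          # assume the episode terminates here
        else:
            s1, a1 = states[t + 1], actions[t + 1]
            exp_q = pi[s1] @ Q[s1]     # expectation of Q under the target policy
            G[t] = rewards[t] + gamma * exp_q + gamma * lam * (g_next - Q[s1, a1])
        g_next = G[t]
    return G

# tiny illustrative MDP with 2 states and 2 actions
rng = np.random.default_rng(1)
Q = rng.random((2, 2)); pi = np.array([[0.7, 0.3], [0.4, 0.6]])
states, actions, rewards = [0, 1, 0, 1], [1, 0, 0, 1], [1.0, 0.0, 0.5, 1.0]
print(q_lambda_returns(rewards, states, actions, Q, pi))
```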
This paper addresses the multiple two-sample test problem in a graph-structured setting, which is a common scenario in fields such as Spatial Statistics and Neuroscience. Each node $v$ in a fixed graph deals with a two-sample testing problem between two node-specific probability density functions (pdfs), $p_v$ and $q_v$. The goal is to identify nodes where the null hypothesis $p_v = q_v$ should be rejected, under the assumption that connected nodes yield similar test outcomes. We propose the non-parametric collaborative two-sample testing (CTST) framework, which efficiently leverages the graph structure and minimizes the assumptions on $p_v$ and $q_v$. Our methodology integrates elements from f-divergence estimation, Kernel Methods, and Multitask Learning. We use synthetic experiments and a real sensor network detecting seismic activity to demonstrate that CTST outperforms state-of-the-art non-parametric statistical tests that apply at each node independently and hence disregard the geometry of the problem.
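CTST itself couples the tests through the graph; the node-independent baseline it is compared against is easy to sketch. The snippet below (synthetic data, our choices throughout) runs an off-the-shelf non-parametric two-sample test, here Kolmogorov-Smirnov, independently at every node, ignoring the graph structure.

```python
# Node-independent baseline: a nonparametric two-sample test at each node,
# with no information shared along graph edges.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n_nodes, alpha = 10, 0.05
shifted = {3, 4, 7}                    # nodes where p_v != q_v by construction

for v in range(n_nodes):
    p_sample = rng.normal(0.0, 1.0, size=200)
    q_sample = rng.normal(0.5 if v in shifted else 0.0, 1.0, size=200)
    pval = ks_2samp(p_sample, q_sample).pvalue
    verdict = "reject H0: p_v = q_v" if pval < alpha else "fail to reject"
    print(f"node {v}: {verdict} (p = {pval:.3f})")
```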
The functional logit regression model was proposed by Escabias et al. (2004) with the objective of modeling a scalar binary response variable from a functional predictor. The model estimation proposed in that case was performed in a finite-dimensional subspace of the space $L^2(T)$ of square-integrable functions, generated by a finite set of basis functions. For that estimation it was assumed that the curves of the functional predictor and the functional parameter of the model belong to the same finite-dimensional subspace. The resulting estimation was affected by severe multicollinearity problems, and the solution given to these problems was based on different functional principal component analyses. The logitFD package introduced here provides a toolbox for fitting these models by implementing the different proposed solutions and by generalizing the model proposed in 2004 to the case of several functional and non-functional predictors. The performance of the functions is illustrated using functional data sets included in the fda.usc package from R-CRAN.
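logitFD is an R package; as a language-neutral sketch of the underlying idea rather than of the package's API, the snippet below fits a functional logit model by expanding discretized curves, extracting principal-component scores (a surrogate for the functional PCA solutions mentioned above), and running an ordinary logistic regression on the scores. All data are synthetic.

```python
# Functional logit via principal-component scores of discretized curves.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
n = 200
y = rng.integers(0, 2, n)                         # scalar binary response
# synthetic curves: class-dependent mean plus smooth and rough noise
curves = (np.sin(2 * np.pi * t) * y[:, None]
          + rng.normal(0, 0.5, (n, 1)) * np.cos(2 * np.pi * t)
          + rng.normal(0, 0.1, (n, t.size)))

scores = PCA(n_components=4).fit_transform(curves)  # FPCA surrogate
clf = LogisticRegression().fit(scores, y)
print("in-sample accuracy:", clf.score(scores, y))
```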
We give a deterministic polynomial time algorithm to compute the endomorphism ring of a supersingular elliptic curve in characteristic $p$, provided that we are given two noncommuting endomorphisms and the factorization of the discriminant of the ring $\mathcal{O}_0$ they generate. At each prime $q$ for which $\mathcal{O}_0$ is not maximal, we compute the endomorphism ring locally by computing a $q$-maximal order containing it and, when $q \neq p$, recovering a path to $\text{End}(E) \otimes \mathbb{Z}_q$ in the Bruhat-Tits tree. We use techniques of higher-dimensional isogenies to navigate towards the local endomorphism ring. Our algorithm improves on a previous algorithm which requires a restricted input and runs in subexponential time under certain heuristics. Page and Wesolowski give a probabilistic polynomial time algorithm to compute the endomorphism ring on input of a single non-scalar endomorphism. Beyond using techniques of higher-dimensional isogenies to divide endomorphisms by a scalar, our methods are completely different.
The detection of echolocation clicks is key to understanding the intricate behaviors of cetaceans and to monitoring their populations. Species that rely on clicks for navigation, foraging, and even communication include sperm whales (Physeter macrocephalus) and a variety of dolphin groups. Echolocation clicks are wideband signals of short duration, often emitted in sequences with varying inter-click intervals. While datasets and models for clicks exist, their detection and classification remain a significant challenge, mostly due to the diversity of click structures, overlapping signals from simultaneously emitting animals, and the abundance of noise transients from, for example, snapping shrimp and shipping cavitation noise. This paper provides a survey of click detection and classification methodologies from 2002 to 2023. We divide the surveyed techniques into categories by methodology: feature analysis (e.g., phase, inter-click interval, and duration), frequency content, energy-based detection, supervised and unsupervised machine learning, template matching, and adaptive detection approaches. Also surveyed are open-access platforms for click detection and databases openly available for testing. For each paper, details of the method applied are given along with its advantages and limitations, and for each category we analyze the remaining challenges. The paper also includes a performance comparison of several schemes over a shared database. Finally, we provide tables summarizing the existing detection schemes in terms of challenges addressed, methods, detection and classification tools applied, features used, and applications.
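As an illustration of the simplest surveyed category, energy-based detection, the following sketch (parameters are our own, not taken from any surveyed paper) band-passes a recording, computes an envelope, and flags samples exceeding an adaptive noise-floor-relative threshold.

```python
# Energy detector for transient clicks: band-pass, envelope, threshold.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def energy_click_detector(x, fs, band=(20e3, 90e3), thresh_factor=8.0):
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))    # instantaneous envelope
    floor = np.median(env)                        # robust noise-floor estimate
    return np.flatnonzero(env > thresh_factor * floor)

fs = 192_000
t = np.arange(int(0.1 * fs)) / fs
x = 0.01 * np.random.default_rng(0).standard_normal(t.size)
x[9_000:9_020] += np.sin(2 * np.pi * 50e3 * t[:20])   # synthetic 50 kHz click
hits = energy_click_detector(x, fs)
print("detected samples near:", hits[:5], "...")
```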
We present a graph-based discretization method for solving hyperbolic systems of conservation laws using discontinuous finite elements. The method is based on the convex limiting technique introduced by Guermond et al. (SIAM J. Sci. Comput. 40, A3211--A3239, 2018). As such, these methods are mathematically guaranteed to be invariant-set preserving and to satisfy discrete pointwise entropy inequalities. In this paper we extend the theory to the specific case of discontinuous finite elements, incorporating the effect of boundary conditions into the formulation. From a practical point of view, the implementation of these methods is algebraic, meaning that they operate directly on the stencil of the spatial discretization. This first paper in a sequence of two introduces and verifies essential building blocks for the convex limiting procedure using discontinuous Galerkin discretizations. In particular, we discuss a minimally stabilized high-order discontinuous Galerkin method that exhibits optimal convergence rates comparable to linear stabilization techniques for cell-based methods. In addition, we discuss a proper choice of local bounds for the convex limiting procedure. A follow-up contribution will focus on the high-performance implementation, benchmarking, and verification of the method. We verify convergence rates on a sequence of one- and two-dimensional tests of differing regularity. In particular, we obtain optimal convergence rates for single rarefaction waves. We also propose a simple test to verify the implementation of boundary conditions and their convergence rates.
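The scalar, one-dimensional essence of convex limiting can be sketched in a few lines: blend an accurate but possibly bound-violating high-order update into an invariant-set-preserving low-order update, $u = u^{\mathrm{low}} + \ell\,(u^{\mathrm{high}} - u^{\mathrm{low}})$, choosing the largest $\ell \in [0,1]$ for which local bounds $u^{\min} \leq u \leq u^{\max}$ hold. The snippet below is this scalar caricature only; the paper's limiter acts on invariant sets of hyperbolic systems.

```python
# Scalar convex limiter: largest blend of the high-order update into the
# (bounds-satisfying) low-order update that respects local bounds.
import numpy as np

def convex_limiter(u_low, u_high, u_min, u_max):
    d = u_high - u_low
    l = np.ones_like(u_low)
    pos, neg = d > 0, d < 0
    l[pos] = np.minimum(1.0, (u_max[pos] - u_low[pos]) / d[pos])
    l[neg] = np.minimum(1.0, (u_min[neg] - u_low[neg]) / d[neg])
    return u_low + l * d

u_low  = np.array([0.2, 0.5, 0.9])   # invariant-set-preserving update
u_high = np.array([0.1, 0.7, 1.3])   # accurate but possibly out of bounds
u_min, u_max = np.zeros(3), np.ones(3)
print(convex_limiter(u_low, u_high, u_min, u_max))  # third entry limited to 1.0
```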
This paper describes a geometrical method for finding the roots $r_1$, $r_2$ of a quadratic equation in one complex variable of the form $x^2+c_1 x+c_2=0$ by means of a Line $L$ and a Circumference $C$ in the complex plane, constructed from the known coefficients $c_1$, $c_2$. This Line-Circumference (LC) geometric structure contains the sought roots $r_1$, $r_2$ at the intersections of its component elements $L$ and $C$. Line $L$ in the LC structure is mapped onto Circumference $C$ by a Möbius transformation. The location and inclination angle of Line $L$ can be computed directly from the coefficients $c_1$, $c_2$, while Circumference $C$ is constructed by dividing the constant term $c_2$ by each point of Line $L$. The paper develops the technical details of the LC method and then shows how it works through a numerical example. The quadratic LC method described here can be extended to polynomials in one variable of degree greater than two, in order to find initial approximations to their roots. As an additional feature, the paper also studies an interesting property of the rectilinear segments connecting key points in a quadratic LC structure.
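A numerical sketch of the LC structure follows. Since $r_1 + r_2 = -c_1$ and $r_1 - r_2 = \pm\sqrt{c_1^2 - 4c_2}$, the line through the roots passes through $-c_1/2$ with inclination $\arg(c_1^2 - 4c_2)/2$; this is our reading of the construction, as the paper derives the location and angle in its own terms. The roots are then recovered as the intersections of $L$ with $C = c_2/L$.

```python
# LC structure for x^2 + c1 x + c2 = 0: intersect line L with C = c2/L.
import numpy as np

c1, c2 = complex(-1, 2), complex(-3, 1)
z0 = -c1 / 2                                 # midpoint of the two roots
theta = np.angle(c1**2 - 4 * c2) / 2         # inclination of the root line
direction = np.exp(1j * theta)

t = np.linspace(-5.0, 5.0, 20_001)
L = z0 + t * direction                       # line L
# L(t) lies on C exactly when its image c2/L(t) falls back on L, i.e.
# when the signed offset g(t) of c2/L(t) from L vanishes:
g = np.imag((c2 / L - z0) / direction)
s = np.flatnonzero(np.sign(g[:-1]) != np.sign(g[1:]))          # brackets of g = 0
t_star = t[s] - g[s] * (t[s + 1] - t[s]) / (g[s + 1] - g[s])   # linear refinement
print("LC intersections:", z0 + t_star * direction)
print("np.roots check:  ", np.roots([1, c1, c2]))
```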