The main objective of this paper is to introduce unique representations and characterizations of the weighted core inverse of matrices. We also investigate various properties of these inverses and their relationships with other generalized inverses. The proposed representations of the matrix-weighted core inverse allow us to establish results on the reverse order law for these inverses. Furthermore, this paper extends the concepts of the generalized bilateral inverse and the $\{1,2,3,1^k\}$-inverse, together with their respective duals, to complex rectangular matrices. We also establish characterizations of EP-ness and of the conditions under which the $W$-weighted $\{1,2,3\}$-inverse and the $W$-weighted $\{1,2,3,1^k\}$-inverse coincide. We then introduce $W$-weighted index-MP, $W$-weighted MP-index, and $W$-weighted MP-index-MP matrices for rectangular complex matrices. In addition, we define the dual inverses of both the weighted bilateral inverse and the $\{1,2,3,1^k\}$-inverse, and we examine the conditions under which weighted bilateral inverses are self-dual.
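For readers less familiar with the bracket notation, the following recap of the classical (unweighted) Penrose equations may help: a $\{1,2,3\}$-inverse satisfies equations (1)-(3), the weighted notions studied in the paper generalize these via the weight matrix $W$, and the additional $1^k$ condition is as defined in the paper (not reproduced here).

```latex
% Classical Penrose equations for a candidate generalized inverse X of A
% (unweighted recap; the paper's W-weighted definitions generalize these):
\begin{aligned}
(1)\quad & AXA = A, &\qquad (2)\quad & XAX = X, \\
(3)\quad & (AX)^{*} = AX, &\qquad (4)\quad & (XA)^{*} = XA.
\end{aligned}
% A \{1,2,3\}-inverse satisfies (1)-(3); the Moore-Penrose inverse A^{\dagger}
% satisfies all four.
```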
Entropy measures quantify the amount of information and correlation present in a quantum system. In practice, when the quantum state is unknown and only copies of it are available, one must resort to estimating such entropy measures. Here we propose a variational quantum algorithm for estimating the von Neumann and R\'enyi entropies, as well as the measured relative entropy and measured R\'enyi relative entropy. Our approach first parameterizes a variational formula for the measure of interest by a quantum circuit and a classical neural network, and then optimizes the resulting objective over the parameter space. We provide numerical simulations of the quantum algorithm using a noiseless quantum simulator. The algorithm yields accurate estimates of the various entropy measures for the examples tested, which makes it a promising approach for use in downstream tasks.
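As a point of reference, the quantities being estimated admit simple classical formulas when the density matrix is known explicitly. The sketch below (plain NumPy, purely illustrative and not the paper's quantum algorithm) computes two of them from the spectrum.

```python
# Classical reference computation of entropy measures that the variational
# algorithm estimates (a minimal NumPy sketch; not the paper's method).
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log rho], computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # discard numerically zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

def renyi_entropy(rho, alpha):
    """S_alpha(rho) = log Tr[rho^alpha] / (1 - alpha), for alpha != 1."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(np.log(np.sum(evals ** alpha)) / (1.0 - alpha))

# Example: a maximally mixed qubit has entropy log 2 in both cases.
rho = np.eye(2) / 2
print(von_neumann_entropy(rho))   # ~0.6931
print(renyi_entropy(rho, 2.0))    # ~0.6931
```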
We develop the formal theory of monads, as established by Street, in univalent foundations. This allows us to formally reason about various kinds of monads at the right level of abstraction. In particular, we define the bicategory of monads internal to a bicategory and prove that it is univalent. We also define Eilenberg-Moore objects, and we show that both Eilenberg-Moore categories and Kleisli categories give rise to Eilenberg-Moore objects. Finally, we relate monads and adjunctions in arbitrary bicategories. Our work is formalized in Coq using the UniMath library.
Maximum likelihood estimation (MLE) is a fundamental problem in statistics. Characteristics of the MLE problem for algebraic statistical models are reflected in the geometry of the \textit{likelihood correspondence}, a variety that ties together data and their maximum likelihood estimators. We construct the ideal of the likelihood correspondence for the large class of toric models and find a Gr\"{o}bner basis in the case of complete and joint independence models arising from multi-way contingency tables. These results provide insight into the properties of these models and offer faster computational strategies for solving the MLE problem.
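For intuition about the models concerned, the MLE for a two-way independence model (a toric model) has a well-known closed form: the estimate factors as the product of the empirical marginals. The snippet below is a small numerical illustration of that fact only, not of the paper's Gr\"{o}bner-basis computation; the example table is invented.

```python
# Closed-form MLE for a two-way joint independence model (toric model):
# p_hat[i, j] = (row marginal i) * (column marginal j).
import numpy as np

counts = np.array([[12, 7, 5],
                   [ 8, 9, 3]])             # observed 2x3 contingency table (illustrative)
N = counts.sum()
row = counts.sum(axis=1) / N                # empirical row marginals
col = counts.sum(axis=0) / N                # empirical column marginals
p_hat = np.outer(row, col)                  # maximum likelihood estimate under independence
print(p_hat)
```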
Estimating the statistics of the state of a dynamical system, from partial and noisy observations, is both mathematically challenging and widely applicable. Furthermore, the applications are of great societal importance, including problems such as probabilistic weather forecasting and prediction of epidemics. Particle filters provide a well-founded approach to the problem, leading to provably accurate approximations of the statistics. However, these methods perform poorly in high dimensions. In 1994 the idea of ensemble Kalman filtering was introduced by Evensen, leading to a methodology that has been widely adopted in the geophysical sciences and also finds application to quite general inverse problems. However, ensemble Kalman filters have defied rigorous analysis of their statistical accuracy, except in the linear Gaussian setting. In this article we describe recent work which takes first steps toward analyzing the statistical accuracy of ensemble Kalman filters beyond the linear Gaussian setting. The subject is inherently technical, as it involves the evolution of probability measures according to a nonlinear and nonautonomous dynamical system, and the approximation of this evolution. It can nonetheless be presented in a fairly accessible fashion, understandable with basic knowledge of dynamical systems, numerical analysis and probability.
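To make the methodology concrete, here is a minimal sketch of one perturbed-observation ensemble Kalman analysis step for a linear observation operator. It is illustrative only; the setup, names, and parameter choices are assumptions, not taken from the article.

```python
# One perturbed-observation EnKF analysis step (illustrative sketch).
import numpy as np

def enkf_analysis(ensemble, y, H, R, rng):
    """ensemble: (N, d) forecast particles; y: observation; H: obs operator; R: obs covariance."""
    N, d = ensemble.shape
    mean = ensemble.mean(axis=0)
    A = ensemble - mean                               # ensemble anomalies
    C = A.T @ A / (N - 1)                             # empirical state covariance
    S = H @ C @ H.T + R
    K = C @ H.T @ np.linalg.solve(S, np.eye(len(y)))  # Kalman gain
    # Each particle assimilates an independently perturbed copy of the data.
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 2))                        # 50 particles in R^2
H = np.array([[1.0, 0.0]])                            # observe the first coordinate only
R = np.array([[0.1]])
updated = enkf_analysis(ens, np.array([0.5]), H, R, rng)
```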
This paper discusses two approaches to the diachronic normalization of Polish texts: a rule-based solution that relies on a set of handcrafted patterns, and a neural normalization model based on the text-to-text transfer transformer architecture. The training and evaluation data prepared for the task are discussed in detail, along with experiments conducted to compare the proposed normalization solutions. Both quantitative and qualitative analyses are presented. It is shown that, at the current stage of inquiry into the problem, the rule-based solution outperforms the neural one on 3 out of 4 variants of the prepared dataset, although in practice both approaches have distinct advantages and disadvantages.
In the Big Data era, with the ubiquity of geolocation sensors in particular, massive datasets exhibiting a possibly complex spatial dependence structure are becoming increasingly available. In this context, the standard probabilistic theory of statistical learning does not apply directly, and guarantees of the generalization capacity of predictive rules learned from such data remain to be established. We analyze here the simple Kriging task from a statistical learning perspective, i.e. by carrying out a nonparametric finite-sample predictive analysis. Given $d\geq 1$ values taken by a realization of a square-integrable random field $X=\{X_s\}_{s\in S}$, $S\subset \mathbb{R}^2$, with unknown covariance structure, at sites $s_1,\; \ldots,\; s_d$ in $S$, the goal is to predict the unknown values it takes at any other location $s\in S$ with minimum quadratic risk. The prediction rule is derived from a training spatial dataset: a single realization $X'$ of $X$, independent of those to be predicted, observed at $n\geq 1$ locations $\sigma_1,\; \ldots,\; \sigma_n$ in $S$. Despite the connection of this minimization problem with kernel ridge regression, establishing the generalization capacity of empirical risk minimizers is far from straightforward, owing to the non-i.i.d. nature of the training data $X'_{\sigma_1},\; \ldots,\; X'_{\sigma_n}$ involved in the learning procedure. In this article, non-asymptotic bounds of order $O_{\mathbb{P}}(1/\sqrt{n})$ are proved for the excess risk of a plug-in predictive rule mimicking the true minimizer in the case of isotropic stationary Gaussian processes observed at locations forming a regular grid in the learning stage. These theoretical results are illustrated by various numerical experiments, on simulated data and on real-world datasets.
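For orientation, the sketch below shows the textbook zero-mean simple Kriging predictor with a plug-in isotropic covariance model. The exponential covariance, the site coordinates, and all names are illustrative assumptions and are not the estimator analyzed in the article.

```python
# Simple Kriging predictor with a plug-in covariance model (illustrative sketch).
import numpy as np

def cov(h, sill=1.0, rng_par=1.0):
    """Isotropic exponential covariance C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / rng_par)

def simple_kriging(sites, values, target, cov_fn):
    """Predict a zero-mean field at `target` from observations (sites, values)."""
    d = np.linalg.norm(sites[:, None, :] - sites[None, :, :], axis=-1)
    K = cov_fn(d)                                        # covariance among observed sites
    k = cov_fn(np.linalg.norm(sites - target, axis=-1))  # covariance to the target site
    weights = np.linalg.solve(K, k)                      # Kriging weights
    return weights @ values

sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
values = np.array([0.3, -0.1, 0.5])
print(simple_kriging(sites, values, np.array([0.5, 0.5]), cov))
```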
The Sinkhorn algorithm is the state of the art for approximating entropic optimal transport (OT) distances between discrete probability distributions. We show that meticulously training a neural network to learn initializations to the algorithm via the entropic OT dual problem can significantly speed up convergence, while maintaining desirable properties of the Sinkhorn algorithm, such as differentiability and parallelizability. We train our predictive network in an adversarial fashion using a second, generating network and a self-supervised bootstrapping loss. The predictive network is universal in the sense that it is able to generalize to any pair of distributions of fixed dimension and cost at inference, and we prove that we can make the generating network universal in the sense that it is capable of producing any pair of distributions during training. Furthermore, we show that our network can even be used as a standalone OT solver to approximate regularized transport distances to within a few percent error, which makes it the first meta neural OT solver.
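For context, the sketch below shows vanilla Sinkhorn iterations with an optional warm start for the dual scaling variable, which is the kind of quantity a predictive network could supply. It is a generic illustration under assumed names and parameters, not the paper's trained initializer.

```python
# Sinkhorn iterations for entropic OT with an optional warm-started dual variable.
import numpy as np

def sinkhorn(a, b, C, eps=0.05, n_iter=200, v_init=None):
    """Entropic OT between histograms a and b with cost matrix C."""
    K = np.exp(-C / eps)                                 # Gibbs kernel
    v = np.ones_like(b) if v_init is None else v_init    # warm start if provided
    for _ in range(n_iter):
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]                      # transport plan
    return np.sum(P * C), P                              # transport cost of P, and the plan

n = 8
a = np.full(n, 1 / n)
b = np.full(n, 1 / n)
x = np.linspace(0, 1, n)
C = (x[:, None] - x[None, :]) ** 2                       # squared-distance cost
cost, plan = sinkhorn(a, b, C)
```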
The automorphism groups of various linear codes are well studied, yielding valuable insights into the structure of the respective codes. This knowledge is successfully applied, e.g., in theoretical analysis and in improving decoding performance, which motivates the analysis of endomorphisms of linear codes. In this work, we discuss the structure of the set of transformation matrices of code endomorphisms, defined as a generalization of code automorphisms, and provide an explicit construction of a bijective mapping between the image of an endomorphism and its canonical quotient space. Furthermore, we introduce a one-to-one mapping between the set of transformation matrices of endomorphisms and a larger linear block code, enabling the use of well-known algorithms in the search for suitable endomorphisms. Additionally, we propose an approach for obtaining unknown code endomorphisms from automorphisms of the code. Finally, we consider ensemble decoding as a possible use case for endomorphisms by introducing endomorphism ensemble decoding (EED). Interestingly, EED can improve decoding performance in cases where other ensemble decoding schemes are not applicable.
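As a toy illustration of the underlying notion, the sketch below checks whether a given transformation matrix maps every codeword of a small binary linear code back into the code, i.e. whether it acts as a (code-preserving) endomorphism in the natural sense. The tiny example code and all names are assumptions, not the constructions of this work.

```python
# Check whether right-multiplication by T maps a binary linear code into itself.
import itertools
import numpy as np

G = np.array([[1, 0, 1],
              [0, 1, 1]])                   # [3,2] single-parity-check code over GF(2)
codewords = {tuple(np.mod(np.array(m) @ G, 2))
             for m in itertools.product([0, 1], repeat=2)}

T = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])                   # permutation of the first two coordinates

# T preserves the code (here it is even an automorphism) iff the image of
# every codeword is again a codeword.
print(all(tuple(np.mod(np.array(c) @ T, 2)) in codewords for c in codewords))  # True
```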
A process algebra is proposed whose semantics maps a term to a nondeterministic finite automaton (NFA, for short). We prove a representability theorem: for each NFA $N$, there exists a process algebraic term $p$ whose semantics is an NFA isomorphic to $N$. Moreover, we provide a concise axiomatization of language equivalence: two NFAs $N_1$ and $N_2$ recognize the same language if and only if the associated terms $p_1$ and $p_2$, respectively, can be equated by means of a set of axioms comprising only 7 axioms and 3 conditional axioms.
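To ground the semantic side of the statement, the sketch below decides language equivalence of two small NFAs by exploring the product of their determinizations. This is a standard algorithmic check, not the paper's axiomatic proof system, and the data-structure choices are assumptions.

```python
# Decide NFA language equivalence via the subset construction on a product.
from collections import deque

def step(states, sym, delta):
    """Set of states reachable from `states` on symbol `sym`."""
    return frozenset(q for s in states for q in delta.get((s, sym), ()))

def equivalent(nfa1, nfa2, alphabet):
    """Each NFA is a triple (delta, start_states, accept_states)."""
    d1, s1, f1 = nfa1
    d2, s2, f2 = nfa2
    start = (frozenset(s1), frozenset(s2))
    seen, queue = {start}, deque([start])
    while queue:
        a, b = queue.popleft()
        if bool(a & f1) != bool(b & f2):      # one accepts, the other does not
            return False
        for sym in alphabet:
            nxt = (step(a, sym, d1), step(b, sym, d2))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Two NFAs for the language (ab)*: one minimal, one with a redundant state.
n1 = ({(0, 'a'): {1}, (1, 'b'): {0}}, {0}, {0})
n2 = ({(0, 'a'): {1, 2}, (1, 'b'): {0}, (2, 'b'): {0}}, {0}, {0})
print(equivalent(n1, n2, 'ab'))               # True
```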
Large Language Models (LLMs) have shown excellent generalization capabilities, which have led to the development of numerous models. These models propose new architectures, tweak existing architectures with refined training strategies, increase context length, use higher-quality training data, and increase training time in order to outperform baselines. Analyzing new developments is crucial for identifying changes that enhance training stability and improve generalization in LLMs. This survey comprehensively analyzes LLM architectures and their categorization, training strategies, training datasets, and performance evaluations, and discusses future research directions. Moreover, the paper covers the basic building blocks and concepts behind LLMs, followed by a complete overview of LLMs, including their important features and functions. Finally, the paper summarizes significant findings from LLM research and consolidates essential architectural and training strategies for developing advanced LLMs. Given the continuous advancements in LLMs, we intend to update this paper regularly by incorporating new sections and featuring the latest LLM models.