Finite-state dimension, introduced early in this century as a finite-state version of classical Hausdorff dimension, is a quantitative measure of the lower asymptotic density of information in an infinite sequence over a finite alphabet, as perceived by finite automata. Finite-state dimension is a robust concept that now has equivalent formulations in terms of finite-state gambling, lossless finite-state data compression, finite-state prediction, entropy rates, and automatic Kolmogorov complexity. The Schnorr-Stimm dichotomy theorem gave the first automata-theoretic characterization of normal sequences, which had been studied in analytic number theory since Borel defined them. This theorem implies that a sequence (or a real number having this sequence as its base-b expansion) is normal if and only if it has finite-state dimension 1. One of the most powerful classical tools for investigating normal numbers is the Weyl criterion, which characterizes normality in terms of exponential sums. Such sums are well-studied objects with many connections to other aspects of analytic number theory, and this has made the use of the Weyl criterion especially fruitful. This raises the question of whether the Weyl criterion can be generalized from finite-state dimension 1 to arbitrary finite-state dimensions, thereby making it a quantitative tool for studying data compression, prediction, etc. This paper does exactly this. We extend the Weyl criterion from a characterization of sequences with finite-state dimension 1 to a criterion that characterizes every finite-state dimension. This turns out not to be a routine generalization of the original Weyl criterion. Even though the exponential sums may diverge for non-normal numbers, finite-state dimension can be characterized in terms of the dimensions of the subsequence limits of these exponential sums. We demonstrate the utility of our criterion through examples.
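For reference, the classical criterion being generalized combines Weyl's equidistribution criterion with Wall's theorem: a real number $x$ is normal to base $b$ if and only if
\[
\lim_{N\to\infty}\frac{1}{N}\sum_{n=0}^{N-1} e^{2\pi i k b^{n} x} \;=\; 0 \qquad \text{for every integer } k \neq 0.
\]
(The notation here is standard rather than taken from the paper.)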
In the study of sparse stochastic block models (SBMs) one often needs to analyze a distributional recursion known as the belief propagation (BP) recursion. Uniqueness of the fixed point of this recursion implies several results about the SBM, including optimal recovery algorithms for the SBM (Mossel et al. (2016)) and the SBM with side information (Mossel and Xu (2016)), and a formula for the SBM mutual information (Abbe et al. (2021)). The 2-community case corresponds to an Ising model, for which Yu and Polyanskiy (2022) established uniqueness in all cases. In this paper we analyze the $q$-ary Potts model, i.e., broadcasting of $q$-ary spins on a Galton-Watson tree with expected offspring degree $d$ through Potts channels with second-largest eigenvalue $\lambda$. We allow the intermediate vertices to be observed through noisy channels (side information). We prove that BP uniqueness holds, with and without side information, when $d\lambda^2 \ge 1 + C \max\{\lambda, q^{-1}\}\log q$ for some absolute constant $C>0$ independent of $q,\lambda,d$. For large $q$ and $\lambda = o(1/\log q)$, this asymptotically achieves the Kesten-Stigum threshold $d\lambda^2=1$. These results imply mutual information formulas and optimal recovery algorithms for the $q$-community SBM in the corresponding ranges. For $q\ge 4$, Sly (2011) and Mossel et al. (2022) showed that there exist choices of $q,\lambda,d$ below the Kesten-Stigum threshold (i.e., $d\lambda^2 < 1$) for which reconstruction is nevertheless possible. Somewhat surprisingly, we show that in such regimes BP uniqueness does not hold, at least in the presence of weak side information. Our technical tool is a theory of $q$-ary symmetric channels, which we initiate here, generalizing the classical and widely used information-theoretic characterization of BMS (binary memoryless symmetric) channels.
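For concreteness, the Potts channel referred to above is the $q$-ary symmetric channel that keeps the input spin with probability $1-\delta$ and flips it to each of the other $q-1$ spins with probability $\delta/(q-1)$; in matrix form (a standard parametrization, not notation from the paper),
\[
P \;=\; (1-\delta)\,I + \frac{\delta}{q-1}\,(J - I), \qquad \lambda \;=\; 1 - \frac{q\,\delta}{q-1},
\]
where $J$ is the all-ones matrix and $\lambda$ is the second-largest eigenvalue (with multiplicity $q-1$), so the Kesten-Stigum condition takes the form $d\lambda^2 = 1$.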
This work was originally published by the author in a 1999 book [1] and later became part of the author's doctoral thesis, also in 1999 [2]. Since the original language of these works is not English, the author provides here a translation of their key ideas. In addition, the chapter on numerical experiments has been updated: the computations were redone on modern computers using contemporary benchmark datasets. This article presents a novel approach to solving the Hartree-Fock equations using Toeplitz and tensor matrices together with bases built from regular finite elements. The issues discussed include the choice of basis, the dependence of data volume and the number of arithmetic operations on the number of basis functions, and the arithmetic complexity and accuracy of computing two- and four-center integrals. The approach has been implemented in a software package, and the results obtained are in good agreement with theory.
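The algorithmic details are developed in the original publications [1, 2]; purely as an illustration of why Toeplitz structure reduces both storage and arithmetic (a generic sketch, not code from the paper), an $n \times n$ Toeplitz matrix can be stored with $O(n)$ numbers and applied to a vector in $O(n \log n)$ operations via circulant embedding and the FFT:

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply a Toeplitz matrix, given by its first column and first row,
    by a vector in O(n log n) time via circulant embedding and the FFT."""
    n = len(x)
    # First column of the (2n-1) x (2n-1) circulant that embeds the Toeplitz matrix.
    c = np.concatenate([first_col, first_row[:0:-1]])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.r_[x, np.zeros(n - 1)]))[:n]
    return y.real  # real-valued inputs assumed for this sketch
```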
This paper examines asymmetric and time-varying dependence structures between financial returns, using a novel approach that combines regime-switching models with the local Gaussian correlation (LGC). We propose an LGC-based bootstrap test of whether the dependence structure of financial returns is the same across different regimes. We examine this test in a Monte Carlo study, where it shows good level and power properties. We argue that this approach is more intuitive than competing approaches, which typically combine regime-switching models with copula theory. Furthermore, the LGC is a semi-parametric approach and hence avoids any parametric specification of the dependence structure. We illustrate our approach using returns from the US-UK stock markets and the US stock and government bond markets. Using a two-regime model for the US-UK stock returns, the test rejects equality of the dependence structures in the two regimes. Furthermore, we find evidence of lower tail dependence in the estimated LGC structure for the regime associated with financial downturns. For a three-regime model fitted to US stock and bond returns, the test rejects equality of the dependence structures between all regime pairs. Moreover, we find that the local Gaussian correlations are primarily positive in the period 1980-2000 and mostly negative from 2000 onwards. In addition, the regime associated with bear markets exhibits weaker but asymmetric dependence, clearly documenting the loss of diversification benefits in times of crisis.
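As a schematic illustration of such a test (a stand-in sketch of ours: ordinary correlation in place of the LGC estimated on a grid, and label permutation in place of the paper's bootstrap scheme):

```python
import numpy as np

def dependence_measure(xy):
    # Stand-in dependence functional: ordinary correlation.
    # (The paper instead estimates the local Gaussian correlation on a grid.)
    return np.corrcoef(xy[:, 0], xy[:, 1])[0, 1]

def equality_test(xy, regimes, n_resamples=2000, seed=0):
    """Resampling test of equal dependence across two regimes (0/1 labels)."""
    rng = np.random.default_rng(seed)
    obs = abs(dependence_measure(xy[regimes == 0]) -
              dependence_measure(xy[regimes == 1]))
    exceed = 0
    for _ in range(n_resamples):
        perm = rng.permutation(regimes)
        stat = abs(dependence_measure(xy[perm == 0]) -
                   dependence_measure(xy[perm == 1]))
        exceed += stat >= obs
    return (exceed + 1) / (n_resamples + 1)  # p-value
```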
Microscopy images are usually analyzed qualitatively or manually, and there is a need for automated quantitative analysis of the imaged objects. In this paper, we present a physics-based computational model for accurate segmentation and geometrical analysis of one-dimensional, irregular and deformable objects in microscopy images. This model, named Nano1D, has four steps: preprocessing, segmentation, separation of overlapping objects, and geometrical measurement. The model is tested on Ag nanowires and successfully segments them and analyzes their geometrical characteristics, including length, width and their distributions. The performance of the algorithm is not undermined by the size, number, density, orientation or overlap of the objects in the images. Its main strength is shown to be the ability to segment and analyze overlapping objects with more than 99% accuracy, whereas current machine learning and computational models suffer from inaccuracy and an inability to segment overlapping objects. Nano1D can analyze one-dimensional (1D) nanoparticles, including nanowires, nanotubes and nanorods, as well as other 1D microstructural features such as microcracks and dislocations.
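For orientation, the generic shape of such a pipeline looks as follows (an illustrative sketch using scikit-image; it omits Nano1D's physics-based separation of overlapping objects):

```python
from skimage import filters, measure, morphology

def measure_1d_objects(image):
    """Preprocess, segment, label and measure elongated objects in an image."""
    smoothed = filters.gaussian(image, sigma=1)              # preprocessing
    binary = smoothed > filters.threshold_otsu(smoothed)     # segmentation
    binary = morphology.remove_small_objects(binary, min_size=64)
    labels = measure.label(binary)                           # connected components
    # Crude geometry: axis lengths as proxies for length and width.
    return [(p.label, p.major_axis_length, p.minor_axis_length)
            for p in measure.regionprops(labels)]
```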
Classification is a classic problem, but it encounters many challenges when the number of features is large, which is common in modern applications such as identifying tumor subtypes from genomic data or categorizing customer attitudes based on online reviews. We propose a new framework that utilizes the ranks of pairwise distances among observations and identifies a common pattern under moderate to high dimensions that has previously been overlooked. The proposed method exhibits superior classification power over existing methods under a variety of scenarios. Furthermore, it can be applied to non-Euclidean data objects, such as network data. We illustrate the method through an analysis of Neuropixels data, where neurons are classified based on their firing activities. Additionally, we explore a related, simpler approach and investigate key quantities that play essential roles in our new approach.
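The basic object underlying the framework can be sketched as follows (an illustrative computation of the rank matrix of pairwise distances, not the paper's full classifier):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import rankdata

def distance_rank_matrix(X):
    """Ranks of all pairwise Euclidean distances among the n observations."""
    d = squareform(pdist(X))                       # n x n distance matrix
    iu = np.triu_indices_from(d, k=1)
    ranks = np.zeros_like(d)
    ranks[iu] = rankdata(d[iu])                    # rank the n(n-1)/2 distances
    return ranks + ranks.T                         # symmetric matrix of ranks
```

Working with these ranks rather than the raw distances is also what allows the approach to extend to non-Euclidean data objects, for which only pairwise dissimilarities are available.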
Under certain regularity conditions, maximum likelihood estimators (MLEs) are asymptotically optimal in the sense of consistency, efficiency, sufficiency, and unbiasedness. In general, however, MLE-based inference is not robust, which matters, for example, for pricing models and risk measures. Actuarial claim severity is continuous, right-skewed, and frequently heavy-tailed, and the data sets to which such models are fitted contain outliers that are difficult to identify and separate from genuine data. Moreover, due to the "loss control strategies" commonly used in the financial and insurance industries, the random variables we observe and wish to model are affected by truncation (due to deductibles), censoring (due to policy limits), scaling (due to coinsurance proportions) and other transformations. To alleviate the lack of robustness of MLE-based inference in risk modeling, in this paper we propose and develop a new estimation method, the method of truncated moments (MTuM), and generalize it to different loss control scenarios. Various asymptotic properties of the resulting estimators are established using central limit theory, and new connections between different estimators are found. A comparative study of the newly designed methods against the corresponding MLEs is performed, and a detailed investigation, including a simulation study, is carried out for the single-parameter Pareto loss model.
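As a schematic example in our own notation (the precise MTuM construction and its treatment of censoring and coinsurance are developed in the paper): for the single-parameter Pareto model with known lower bound $x_0$,
\[
F_\alpha(x) \;=\; 1-\left(\frac{x_0}{x}\right)^{\alpha}, \qquad x \ge x_0,
\]
a truncated-moment estimator equates empirical and theoretical moments of the data restricted to an observable window $(d, u]$, e.g., between a deductible and a policy limit,
\[
\frac{1}{n}\sum_{i=1}^{n} h(x_i)\,\mathbf{1}\{d < x_i \le u\} \;=\; \mathbb{E}_\alpha\!\left[h(X)\,\mathbf{1}\{d < X \le u\}\right],
\]
and solves for $\alpha$; unlike the full likelihood, the estimating equation ignores observations outside the window, which limits the influence of extreme observations.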
Feature-Imitating Networks (FINs) are neural networks whose weights are initialized to approximate closed-form statistical features. In this work, we perform the first evaluation of FINs for biomedical image processing tasks. We begin by training a set of FINs to imitate six common radiomics features, and then compare the performance of networks with and without the FINs on three experimental tasks: COVID-19 detection from CT scans, brain tumor classification from MRI scans, and brain tumor segmentation from MRI scans. We find that FINs provide best-in-class performance for all three tasks, while converging faster and more consistently than networks with similar or greater representational power. Our results provide evidence that FINs may deliver state-of-the-art performance on a variety of other biomedical image processing tasks.
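A minimal sketch of the FIN idea (a toy example of ours; the radiomics features and architectures used in the paper differ): pretrain a small network to reproduce a closed-form feature, here the per-sample variance, and then reuse it as an initialized module inside a larger task network.

```python
import torch
from torch import nn

# Toy Feature-Imitating Network: learn to output the variance of a 64-d input.
fin = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(fin.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(2000):
    x = torch.randn(256, 64)                 # synthetic pretraining batch
    target = x.var(dim=1, keepdim=True)      # closed-form feature to imitate
    loss = loss_fn(fin(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

# `fin` can now be embedded in a downstream model, frozen or fine-tuned.
```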
The analysis of survey data is a frequently arising issue in clinical trials, particularly when capturing quantities that are difficult to measure with, e.g., a technical device or a biochemical procedure. Typical examples are questionnaires about patients' well-being, pain, anxiety, quality of life or consent to an intervention. The data are captured on a discrete scale containing only a limited number (usually three to ten) of possible answers, from which the respondent has to pick the one that best fits his or her personal opinion on the question. Such data are generally ordinal, as the answers can usually be arranged in increasing order, e.g., "bad", "neutral", "good" for well-being or "none", "mild", "moderate", "severe" for pain. Since responses are often stored numerically for data processing purposes, analyzing survey data with ordinary linear regression (OLR) models may seem natural. However, the OLR assumptions are often not met: linear regression requires constant variance of the response variable and can yield predictions outside the range of the response categories. Moreover, it only provides insight into the mean response, which, depending on the response distribution, may not be very representative. In contrast, ordinal regression models provide probability estimates for all response categories and thus yield information about the full response scale rather than just the mean. Although these methods are well described in the literature, they seem to be rarely applied to biomedical or survey data. In this paper, we give a concise overview of the fundamentals of ordinal models, apply them to a real data set, outline the use of state-of-the-art software for doing so, and point out strengths, limitations and typical pitfalls. This article is a companion work to a current vignette-based structured interview study in paediatric anaesthesia.
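For readers unfamiliar with these models, the most common specification is the cumulative logit (proportional odds) model: for ordered categories $1 < 2 < \dots < K$ and covariates $x$,
\[
\log\frac{P(Y \le j \mid x)}{P(Y > j \mid x)} \;=\; \theta_j - x^{\top}\beta, \qquad j = 1,\dots,K-1,
\]
with ordered intercepts $\theta_1 \le \dots \le \theta_{K-1}$ and a single slope vector $\beta$; the probability of each category then follows as $P(Y = j \mid x) = P(Y \le j \mid x) - P(Y \le j-1 \mid x)$. (This is the standard textbook form; the paper discusses this and related ordinal models in detail.)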
One of the most interesting tools to have recently entered the data science toolbox is topological data analysis (TDA). With the explosion of available data sizes and dimensions, identifying and extracting the underlying structure of a given dataset is a fundamental challenge in data science, and TDA provides a methodology for analyzing the shape of a dataset using tools and perspectives from algebraic topology. However, its computational complexity quickly makes it infeasible to process large datasets, especially high-dimensional ones. Here, we introduce a preprocessing strategy called the Characteristic Lattice Algorithm (CLA), which allows users to reduce the size of a given dataset as desired while maintaining its geometric and topological features, thereby making the computation of TDA feasible or shortening its computation time. In addition, we derive a stability theorem and an upper bound on the barcode errors for CLA based on the bottleneck distance.
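While CLA itself is defined in the paper, the flavour of lattice-based size reduction can be illustrated with a simple sketch (ours, not the authors' algorithm): snap each point to the centre of its cell in a grid of side $\varepsilon$ and keep one representative per occupied cell, so that no point moves farther than $\varepsilon\sqrt{d}/2$ and the Hausdorff distance between the original and reduced clouds is bounded accordingly.

```python
import numpy as np

def lattice_reduce(points, eps):
    """Keep one representative (the cell centre) per occupied cell of an eps-grid."""
    cells = np.floor(points / eps).astype(int)        # integer cell indices
    _, idx = np.unique(cells, axis=0, return_index=True)
    return (cells[np.sort(idx)] + 0.5) * eps          # cell centres, original order
```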
Within the next decade the Laser Interferometer Space Antenna (LISA) is due to be launched, providing the opportunity to extract physics from stellar objects and systems, such as \textit{Extreme Mass Ratio Inspirals} (EMRIs), that are otherwise undetectable to ground-based interferometers and Pulsar Timing Arrays (PTAs). Unlike sources previously detected by currently available observational methods, these sources can \textit{only} be simulated using an accurate computation of the gravitational self-force. Whereas the field has seen outstanding progress in the frequency domain, metric reconstruction and self-force calculations are still an open challenge in the time domain. Such computations would not only further corroborate frequency-domain calculations and models, but would also allow for fully self-consistent evolution of the orbit under the effect of the self-force. Given that we have \textit{a priori} information about the local structure of the discontinuity at the particle, we show how to construct discontinuous spatial and temporal discretisations by operating on discontinuous Lagrange and Hermite interpolation formulae, and hence recover higher-order accuracy. In this work we demonstrate how this technique, in conjunction with a well-suited gauge choice (hyperboloidal slicing) and numerical methods (discontinuous collocation with time-symmetric integration), provides a relatively simple method-of-lines algorithm for the problem. This is the first of a series of papers studying the behaviour of a point particle following circular geodesic motion in Schwarzschild in the \textit{time domain}. Here we describe the numerical machinery necessary for these computations and show that our approach is not only capable of highly accurate radiation flux measurements but is also suitable for evaluating the necessary field and its derivatives at the particle limit.
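Schematically, and in our notation rather than the paper's: if the jumps $[f^{(m)}]$ in the field and its derivatives at the particle location $\xi$ are known, a Lagrange interpolant evaluated on one side of $\xi$ may use nodes $x_j$ lying on the other side after correcting their values with the Taylor series of the jump,
\[
f_{+}(x_j) \;\approx\; f(x_j) + \sum_{m=0}^{M}\frac{[f^{(m)}]}{m!}\,(x_j-\xi)^{m}, \qquad x_j < \xi,
\]
which restores the high-order convergence that naive interpolation across the discontinuity would destroy; the temporal discretisation is treated analogously via Hermite-type formulae.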