We consider Markov logic networks and relational logistic regression, two fundamental representation formalisms in statistical relational artificial intelligence that use weighted formulas in their specification; Markov logic networks are based on undirected graphs, whereas relational logistic regression is based on directed acyclic graphs. We show that when the weight parameters are scaled with the domain size, the asymptotic behaviour of a relational logistic regression model is transparently controlled by the parameters, and we supply an algorithm to compute asymptotic probabilities. Using two examples, we show that this is not true for Markov logic networks. Drawing on several further examples, mainly from the literature, we discuss how the application context can help the user decide when such scaling is appropriate and when the raw unscaled parameters might be preferable. We highlight random sampling as a particularly promising area of application for scaled models and outline possible avenues for further research.
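To make the contrast concrete, here is a minimal sketch (ours, not the paper's; the two-weight model and the numbers are assumptions) of a toy relational logistic regression unit with and without domain-size scaling:

    import math

    def sigmoid(x):
        return 1 / (1 + math.exp(-x))

    # Toy model: P(Q | k of d parent atoms are true) = sigmoid(w0 + w1 * k)
    # with raw weights, or sigmoid(w0 + (w1 / d) * k) with scaled weights.
    w0, w1 = -1.0, 2.0
    for d in [10, 100, 1000, 10000]:
        k = d // 2                      # say half the parent atoms are true
        raw = sigmoid(w0 + w1 * k)
        scaled = sigmoid(w0 + (w1 / d) * k)
        print(f"d={d:6d}  raw={raw:.6f}  scaled={scaled:.6f}")

The raw probability saturates (here, to 1) as the domain grows, while the scaled probability converges to sigmoid(w0 + w1/2), a limit read off directly from the parameters.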
General first-order methods (GFOMs), including various gradient descent and AMP algorithms, constitute a broad class of iterative algorithms in modern statistical learning problems. Some GFOMs also serve as constructive proof devices, iteratively characterizing the empirical distributions of statistical estimators in the large-system limit for any fixed number of iterations. This paper develops a non-asymptotic, entrywise characterization for a general class of GFOMs. Our characterizations capture the precise entrywise behavior of the GFOMs and hold universally across a broad class of heterogeneous random matrix models. As a corollary, we provide the first non-asymptotic description of the empirical distributions of GFOMs beyond Gaussian ensembles. We demonstrate the utility of these general results in two applications. In the first application, we prove entrywise universality for regularized least squares estimators in the linear model by controlling the entrywise error relative to a suitably constructed GFOM. This algorithmic proof method also leads to systematically improved averaged universality results for regularized regression estimators in the linear model, and resolves the universality conjecture for (regularized) MLEs in logistic regression. In the second application, we obtain entrywise Gaussian approximations for a class of gradient descent algorithms. Our approach provides a non-asymptotic state evolution for the bias and variance of the algorithm along the iteration path, applicable to non-convex loss functions. The proof relies on a new recursive leave-k-out method that provides almost delocalization for the GFOMs and their derivatives. Crucially, our method ensures entrywise universality for up to polylogarithmically many iterations, which facilitates effective $\ell_2/\ell_\infty$ control between certain GFOMs and statistical estimators in applications.
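For orientation, one common way to write a GFOM with data matrix $A$ (our rendering of the standard definition, following Celentano, Montanari and Wu; the paper's class may differ in details) is
\[
  z^{t+1} \;=\; A\, f_t\big(z^1, \dots, z^t\big) \;+\; g_t\big(z^1, \dots, z^t\big),
\]
where $f_t$ and $g_t$ act entrywise across coordinates; gradient descent and AMP arise from particular choices of $f_t$ and $g_t$.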
We propose a variational symplectic numerical method for the time integration of dynamical systems derived from the least action principle. We assume a quadratic interpolation of the state within each time step and approximate the action over the step by Simpson's quadrature formula. The resulting scheme is nonlinear and symplectic. Our first numerical experiments concern a nonlinear pendulum, for which we observe very good convergence properties.
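As a minimal sketch of the construction (ours, not the authors' code; the pendulum Lagrangian $L(q,\dot q) = \tfrac12 \dot q^2 + \cos q$, the step size, and the crude initialization are assumptions), one can determine the internal midpoint by stationarity of the Simpson-approximated action and advance via the discrete Euler-Lagrange equations:

    import numpy as np
    from scipy.optimize import fsolve

    h = 0.1  # time step

    def lagrangian(q, qdot):
        return 0.5 * qdot**2 + np.cos(q)   # pendulum, unit constants

    def action(q0, qm, q1):
        # Quadratic interpolant through (0, q0), (h/2, qm), (h, q1);
        # its nodal velocities, then Simpson's rule on [0, h].
        v0 = (-3*q0 + 4*qm - q1) / h
        vm = (q1 - q0) / h
        v1 = (q0 - 4*qm + 3*q1) / h
        return (h/6) * (lagrangian(q0, v0) + 4*lagrangian(qm, vm)
                        + lagrangian(q1, v1))

    def dS(q0, qm, q1, i, eps=1e-6):
        # Central-difference partial derivative of the one-step action.
        x = np.array([q0, qm, q1], float)
        xp, xm = x.copy(), x.copy()
        xp[i] += eps; xm[i] -= eps
        return (action(*xp) - action(*xm)) / (2*eps)

    def step(q_prev, qm_prev, q_cur):
        # Nonlinear system: stationarity in the internal node qm, plus the
        # discrete Euler-Lagrange equation at q_cur.
        def eqs(z):
            qm, q_next = z
            return [dS(q_cur, qm, q_next, 1),
                    dS(q_prev, qm_prev, q_cur, 2) + dS(q_cur, qm, q_next, 0)]
        return fsolve(eqs, [q_cur, q_cur])

    q_prev, q_cur = 1.0, 1.0                 # start (approximately) at rest
    qm_prev = fsolve(lambda qm: dS(q_prev, qm[0], q_cur, 1), [q_cur])[0]
    for _ in range(5):
        qm_prev, q_next = step(q_prev, qm_prev, q_cur)
        q_prev, q_cur = q_cur, q_next
        print(q_cur)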
A nonlinear complex system governed by physics acting on multiple spatial and temporal scales cannot be fully understood with a single diagnostic: each provides only a partial view, and much information is lost during data extraction. Even combining multiple diagnostics yields only imperfect projections of the system's physics. By identifying hidden inter-correlations between diagnostics, we can leverage their mutual support to fill in these gaps, but uncovering these inter-correlations analytically is too complex. We introduce a machine learning methodology to address this issue. Our multimodal approach generates super-resolution data encompassing multiple physics phenomena, capturing detailed structural evolution and responses to perturbations that were previously unobservable. The methodology addresses a critical problem in fusion plasmas: the edge localized mode (ELM), a plasma instability that can severely damage reactor walls. One method of stabilizing ELMs is to use resonant magnetic perturbations to trigger magnetic islands. However, the low spatial and temporal resolution of measurements limits the analysis of these magnetic islands, given their small size, rapid dynamics, and complex interactions within the plasma. With super-resolution diagnostics, we can experimentally verify theoretical models of magnetic islands for the first time, providing unprecedented insight into their role in ELM stabilization. This advancement aids the development of effective ELM suppression strategies for future fusion reactors such as ITER, and has broader applications, with the potential to transform diagnostics in fields such as astronomy, astrophysics, and medical imaging.
This paper studies the influence of probabilism and non-determinism on some quantitative aspect $X$ of the execution of a system modeled as a Markov decision process (MDP). To this end, the novel notion of demonic variance is introduced: for a random variable $X$ in an MDP $M$, it is defined as $1/2$ times the maximal expected squared distance of the values of $X$ in two independent executions of $M$ in which also the non-deterministic choices are resolved independently by two distinct schedulers. It is shown that the demonic variance lies between $1$ and $2$ times the maximal variance of $X$ in $M$ achievable by a single scheduler. This allows the definition of a non-determinism score for $M$ and $X$, measuring how strongly the difference of $X$ in two executions of $M$ can be influenced by the non-deterministic choices. Properties of MDPs $M$ with extremal values of the non-determinism score are established. Furthermore, the algorithmic problems of computing the maximal variance and the demonic variance are investigated for two random variables, namely weighted reachability and accumulated rewards. In the process, the structure of schedulers maximizing the variance and of scheduler pairs realizing the demonic variance is also analyzed.
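In symbols, a natural rendering of this definition (our notation, not necessarily the paper's) is
\[
  \mathbb{V}^{\mathrm{dem}}_{M}(X) \;=\; \tfrac{1}{2}\, \sup_{\mathfrak{S}_1, \mathfrak{S}_2} \mathbb{E}^{\mathfrak{S}_1 \otimes \mathfrak{S}_2}\big[(X_1 - X_2)^2\big],
\]
where the supremum ranges over pairs of schedulers resolving the non-determinism in the two independent copies of $M$, and the stated comparison reads $\sup_{\mathfrak{S}} \mathbb{V}^{\mathfrak{S}}_{M}(X) \leq \mathbb{V}^{\mathrm{dem}}_{M}(X) \leq 2 \sup_{\mathfrak{S}} \mathbb{V}^{\mathfrak{S}}_{M}(X)$; note that taking $\mathfrak{S}_1 = \mathfrak{S}_2$ recovers the ordinary variance under that scheduler.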
We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum-cost flow in undirected graphs with real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph $G = (V, E)$, vertex demands $b \in \mathbb{R}^V$ with $\sum_{v \in V} b(v) = 0$, positive edge costs $c \in \mathbb{R}_{>0}^E$, and a parameter $\varepsilon > 0$. In $O(\varepsilon^{-2} m \log^{O(1)} n)$ time, it returns a flow $f$ such that the net flow out of each vertex equals that vertex's demand and the cost of the flow is within a $(1 + \varepsilon)$ factor of optimal. Our algorithm is combinatorial, and its running time does not depend on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all prior almost- and near-linear time approximation schemes for transshipment relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the $\Omega(n^2)$ vertex-vertex distances that such an approximation suggests, we also take advantage of the clustering method used in the well-known Thorup-Zwick distance oracle.
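For concreteness, transshipment can be stated as the linear program (a standard formulation, not quoted from the paper): orient each edge arbitrarily and find $f \in \mathbb{R}^E$ minimizing
\[
  \sum_{e \in E} c(e)\,|f(e)| \qquad \text{subject to} \qquad \sum_{e \in \delta^{+}(v)} f(e) \;-\; \sum_{e \in \delta^{-}(v)} f(e) \;=\; b(v) \quad \text{for all } v \in V,
\]
where $\delta^{+}(v)$ and $\delta^{-}(v)$ are the edges oriented out of and into $v$; the algorithm returns a feasible $f$ of cost at most $(1+\varepsilon)$ times the optimum.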
Classical statistics deals with the analysis of determinate and precise data. In reality, however, there are many cases where information is not exact and a degree of imprecision, uncertainty, incompleteness, or vagueness is observed. In these situations, such uncertainties can make classical statistics less accurate; this is where neutrosophic statistics steps in to improve the accuracy of data analysis. In this article, we consider the Birnbaum-Saunders distribution (BSD), which is very flexible and practical for real-world data modeling. By integrating the neutrosophic concept, we improve the BSD's ability to manage uncertainty effectively. In addition, we provide maximum likelihood parameter estimates. Subsequently, we illustrate the practical advantages of the neutrosophic model using two cases from the industrial and environmental fields. This paper emphasizes the significance of the neutrosophic BSD as a robust tool for modeling and analysing imprecise data, filling a crucial gap left by classical statistical methods.
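A minimal sketch (ours, not the paper's code; the data and the interval construction are made up) of the underlying idea: classical maximum likelihood fitting of the Birnbaum-Saunders distribution, which SciPy exposes as fatiguelife, repeated at the endpoints of interval-valued (neutrosophic-style) observations so that each parameter estimate comes out as an interval rather than a single number:

    import numpy as np
    from scipy.stats import fatiguelife

    rng = np.random.default_rng(0)
    exact = fatiguelife.rvs(c=0.5, scale=2.0, size=200, random_state=rng)
    # Imprecise observations recorded as intervals around each exact value.
    lower, upper = exact * 0.95, exact * 1.05

    for name, sample in [("lower", lower), ("upper", upper)]:
        c_hat, loc_hat, scale_hat = fatiguelife.fit(sample, floc=0)
        print(f"{name}: shape={c_hat:.3f}, scale={scale_hat:.3f}")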
In relational verification, judicious alignment of computational steps facilitates proving relations between programs using simple relational assertions. Relational Hoare logics (RHLs) provide compositional rules that embody various alignments of executions. Seemingly more flexible alignments can be expressed in terms of product automata based on program transition relations. A single degenerate alignment rule (self-composition), atop a complete Hoare logic, yields an RHL for $\forall\forall$ properties that is complete in the ordinary logical sense (Cook '78). The notion of alignment completeness was previously proposed as a more satisfactory measure, and some rules were shown to be alignment complete with respect to a few ad hoc forms of alignment automata. This paper proves alignment completeness with respect to a general class of $\forall\forall$ alignment automata, for an RHL comprising standard rules together with a rule of semantics-preserving rewrites based on Kleene algebra with tests. A new logic for $\forall\exists$ properties is introduced and shown to be alignment complete. The $\forall\forall$ and $\forall\exists$ automata are shown to be semantically complete; thus both logics are complete in the ordinary sense. Recent work by D'Osualdo et al. highlights the importance of completeness relative to assumptions (which we term entailment completeness) and presents $\forall\forall$ examples seemingly beyond the scope of RHLs. Additional rules enable these examples to be proved in our RHL, shedding light on the open problem of entailment completeness.
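For orientation, the degenerate self-composition rule can be rendered (in a standard notation, not quoted from the paper) as
\[
  \frac{\vdash \{P\}\; c_1 ;\, c_2' \;\{Q\}}{\vdash \{P\}\; c_1 \sim c_2 \;\{Q\}},
\]
where $c_2'$ is a copy of $c_2$ whose variables are renamed apart from those of $c_1$: a single unary Hoare logic derivation for the sequential composition establishes the $\forall\forall$ relational judgment, with no alignment of intermediate steps.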
One of the hallmark achievements of the theory of graphical models and Bayesian model selection is the celebrated greedy equivalence search (GES) algorithm due to Chickering and Meek. GES is known to consistently estimate the structure of directed acyclic graph (DAG) models in various special cases, including Gaussian and discrete models, which are in particular curved exponential families. A general theory covering general nonparametric DAG models, however, has been missing. Here, we establish the consistency of greedy equivalence search for general families of DAG models that satisfy smoothness conditions on the Markov factorization, and hence need not be curved exponential families, or even parametric. The proof leverages recent advances in nonparametric Bayes to construct a test for comparing misspecified DAG models that avoids arguments based on the Laplace approximation. Nonetheless, when the Laplace approximation is valid and a consistent scoring function exists, we recover the classical result. Altogether, we obtain a general consistency theorem for GES applied to general DAG models.
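As a toy illustration of the classical parametric setting (our sketch, not the paper's; GES itself then greedily inserts and deletes edges to improve the total score over equivalence classes), a consistent score for linear-Gaussian DAG models is the BIC, computable node by node from least-squares regressions:

    import numpy as np

    def gaussian_bic(X, parents_of):
        # BIC of a linear-Gaussian DAG: per-node Gaussian log-likelihood of
        # each variable regressed on its parents, minus a complexity penalty.
        # Assumes the columns of X have been centered.
        n, d = X.shape
        loglik, n_params = 0.0, 0
        for j in range(d):
            pa = parents_of[j]
            if pa:
                beta, *_ = np.linalg.lstsq(X[:, pa], X[:, j], rcond=None)
                resid = X[:, j] - X[:, pa] @ beta
            else:
                resid = X[:, j]
            sigma2 = max(resid @ resid / n, 1e-12)
            loglik += -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            n_params += len(pa) + 1           # coefficients plus a variance
        return loglik - 0.5 * n_params * np.log(n)

For example, gaussian_bic(X, {0: [], 1: [0], 2: [0, 1]}) scores the DAG $0 \to 1 \to 2$, $0 \to 2$ on centered data X.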
Complex networks are critical in many scientific, technological, and societal contexts due to their ability to represent and analyze intricate systems with interdependent components. Often, after the nodes of a network have been labeled by a community detection algorithm, its modular organization emerges, allowing a better understanding of the underlying structure by uncovering hidden relationships. In this paper, we introduce a novel information-geometric framework for the filtering and decomposition of networks whose nodes have been labeled. Our approach treats the labeled network as the outcome of a Markov random field modeled by a q-state Potts model. In information geometry, the first- and second-order Fisher information matrices are related to the metric and curvature tensors of the parameter space of a statistical model. By computing an approximation to the local shape operator, the proposed methodology is able to identify low- and high-information nodes, allowing the decomposition of the labeled network into two complementary subgraphs. Hence, we call this method the LO-HI decomposition. Experimental results with several kinds of networks show that the high-information subgraph is often related to edges and boundaries, while the low-information subgraph is a smoother version of the network, in the sense that the modular structure is improved.
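The following toy sketch is ours and deliberately crude: it replaces the Fisher-information and shape-operator machinery with a simple local label-disagreement score, only to illustrate the shape of a LO-HI style split on a labeled graph (boundary nodes land in the high-information part):

    import networkx as nx

    def lo_hi_split(G, labels, threshold=0.5):
        # Proxy "information" per node: the fraction of neighbors whose
        # community label differs; boundary nodes score high.
        score = {v: sum(labels[u] != labels[v] for u in G[v])
                    / max(G.degree(v), 1)
                 for v in G}
        lo = [v for v in G if score[v] < threshold]
        hi = [v for v in G if score[v] >= threshold]
        return G.subgraph(lo).copy(), G.subgraph(hi).copy()

    G = nx.karate_club_graph()
    labels = {v: G.nodes[v]["club"] for v in G}      # two communities
    low_G, high_G = lo_hi_split(G, labels)
    print(len(low_G), "low-information nodes;",
          len(high_G), "high-information nodes")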
We derive information-theoretic generalization bounds for supervised learning algorithms based on the information contained in predictions rather than in the output of the training algorithm. These bounds improve on existing information-theoretic bounds, are applicable to a wider range of algorithms, and address two key challenges: (a) they give meaningful results for deterministic algorithms, and (b) they are significantly easier to estimate. We show experimentally that the proposed bounds closely follow the generalization gap in practical deep learning scenarios.
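For context (standard background rather than the new result): the classical input-output bound of Xu and Raginsky states that, for a loss that is $\sigma$-subgaussian,
\[
  \big| \mathbb{E}[\mathrm{gen}(W, S)] \big| \;\leq\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
\]
where $W$ is the output of the training algorithm and $S$ the training set of size $n$; this mutual information is typically vacuous for deterministic algorithms and hard to estimate, which is precisely what measuring the information in the predictions instead of in $W$ is designed to avoid.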