We revisit the classical result of Morris et al.~(AAAI'19) that message-passing graph neural networks (MPNNs) are equal in their distinguishing power to the Weisfeiler--Leman (WL) isomorphism test. Morris et al.~show their simulation result with the ReLU activation function and $O(n)$-dimensional feature vectors, where $n$ is the number of nodes of the graph. Recently, by introducing randomness into the architecture, Aamand et al.~(NeurIPS'22) improved this bound to $O(\log n)$-dimensional feature vectors, although at the expense of guaranteeing perfect simulation only with high probability. In all these constructions, to guarantee equivalence to the WL test, the dimension of the feature vectors in the MPNN has to grow with the size of the graphs. However, architectures used in practice have feature vectors of constant dimension. Thus, there is a gap between the guarantees provided by these results and the actual characteristics of architectures used in practice. In this paper we close this gap by showing that, for \emph{any} non-polynomial analytic activation function (such as the sigmoid), feature vectors of dimension $d=1$ suffice to guarantee that MPNNs are equivalent to the WL test, independently of the size of the graphs. Our main technical insight is that, for simulating multisets in the WL test, it is enough to use linear independence of feature vectors over the rationals instead of the reals. The countability of the rationals, together with nice properties of analytic functions, allows us to maintain the simulation invariant across the iterations of the WL test without increasing the dimension of the feature vectors.
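For orientation, the MPNN update underlying these results can be written in the following generic form (the notation here is ours, not the paper's):
\[
x_v^{(t)} \;=\; \sigma\Big( W_1\, x_v^{(t-1)} \;+\; W_2 \sum_{u \in N(v)} x_u^{(t-1)} \Big),
\]
where $\sigma$ is the activation function and $N(v)$ denotes the neighbourhood of node $v$. With $d=1$ the features $x_v^{(t)}$ are scalars, and the multiset argument sketched above rests on the elementary fact that if scalar values $q_1,\dots,q_k$ are linearly independent over $\mathbb{Q}$, then two multisets over $\{q_1,\dots,q_k\}$ with equal sums must coincide.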
This work addresses the problem of high-dimensional classification by exploring the generalized Bayesian logistic regression method under a sparsity-inducing prior distribution. The method raises the likelihood to a fractional power, resulting in the so-called fractional posterior. Our study yields concentration results for the fractional posterior, not only for the joint distribution of the predictor and response variable but also for the regression coefficients. Significantly, we derive novel misclassification excess risk bounds for sparse generalized Bayesian logistic regression. These results parallel recent findings for penalized methods in the frequentist literature. Furthermore, we extend our results to the scenario of model misspecification, which is of critical importance.
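For concreteness, the fractional posterior in question takes the standard form (notation ours):
\[
\pi_\alpha(\beta \mid D_n) \;\propto\; \Big[ \prod_{i=1}^{n} p(y_i \mid x_i, \beta) \Big]^{\alpha}\, \pi(\beta), \qquad \alpha \in (0,1),
\]
where $p(y \mid x, \beta)$ is the logistic likelihood, $\pi$ is the sparsity-inducing prior, and $\alpha = 1$ recovers the usual Bayesian posterior.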
In this article, the stabilizer-free weak Galerkin (SFWG) finite element method is applied to the Ciarlet--Raviart mixed form of the biharmonic equation. We utilize the SFWG solutions of second-order elliptic problems to define projection operators, build error equations, and further derive the error estimates. Finally, numerical examples support the theoretical results.
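For reference, the Ciarlet--Raviart mixed form splits the fourth-order problem $\Delta^2 u = f$ into two second-order elliptic problems by introducing the auxiliary variable $w = -\Delta u$ (a standard reformulation; boundary conditions are omitted for brevity):
\[
-\Delta w = f, \qquad -\Delta u = w,
\]
which is what allows SFWG solutions of second-order elliptic problems to be reused in the analysis.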
To date, most methods for simulating conditioned diffusions are limited to the Euclidean setting. The conditioned process can be constructed using a change of measure known as Doob's $h$-transform. The specific type of conditioning depends on a function $h$ which is typically unknown in closed form. To resolve this, we extend the notion of guided processes to a manifold $M$, where one replaces $h$ by a function based on the heat kernel on $M$. We consider the case of a Brownian motion with drift, constructed using the frame bundle of $M$, conditioned to hit a point $x_T$ at time $T$. We prove equivalence of the laws of the conditioned process and the guided process with a tractable Radon-Nikodym derivative. Subsequently, we show how one can obtain guided processes on any manifold $N$ that is diffeomorphic to $M$ without assuming knowledge of the heat kernel on $N$. We illustrate our results with numerical simulations and an example of parameter estimation where a diffusion process on the torus is observed discretely in time.
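As background, in the Euclidean case Doob's $h$-transform turns an SDE $\mathrm{d}X_t = b(t, X_t)\,\mathrm{d}t + \sigma(t, X_t)\,\mathrm{d}W_t$ into the conditioned process (standard form; notation ours):
\[
\mathrm{d}X_t^\star = b(t, X_t^\star)\,\mathrm{d}t + (\sigma\sigma^{\top})(t, X_t^\star)\,\nabla_x \log h(t, X_t^\star)\,\mathrm{d}t + \sigma(t, X_t^\star)\,\mathrm{d}W_t,
\]
where $h(t,x)$ is the transition density to the conditioning event, e.g. $h(t,x) = p(t, x;\, T, x_T)$ for hitting $x_T$ at time $T$. A guided process replaces the intractable $h$ by a tractable surrogate, here one built from the heat kernel on $M$.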
The present work explores the theoretical limits of Machine Learning (ML) within the framework of Kolmogorov's theory of Algorithmic Probability, which clarifies the notion of entropy as Expected Kolmogorov Complexity and formalizes other fundamental concepts, such as Occam's razor, via Levin's Universal Distribution. As a fundamental application, we develop Maximum Entropy methods that allow us to derive the Erd\H{o}s--Kac Law and the Hardy--Ramanujan theorem in Probabilistic Number Theory, and, via the Prime Coding Theorem, establish the impossibility of discovering a formula for primes using Machine Learning.
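For reference, the Erd\H{o}s--Kac Law states that the number $\omega(n)$ of distinct prime factors of $n$ is asymptotically Gaussian:
\[
\lim_{N \to \infty} \frac{1}{N}\,\#\Big\{ n \le N : \frac{\omega(n) - \log\log n}{\sqrt{\log\log n}} \le t \Big\} = \Phi(t),
\]
where $\Phi$ is the standard normal distribution function; the Hardy--Ramanujan theorem is the corresponding concentration statement that $\omega(n) \sim \log\log n$ for almost all $n$.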
Mendelian randomization uses genetic variants as instrumental variables to make causal inferences about the effects of modifiable risk factors on diseases from observational data. One of the major challenges in Mendelian randomization is that many genetic variants are only modestly or even weakly associated with the risk factor of interest, a setting known as many weak instruments. Many existing methods, such as the popular inverse-variance weighted (IVW) method, can be biased when the instrument strength is weak. To address this issue, the debiased IVW (dIVW) estimator, which is shown to be robust to many weak instruments, was recently proposed. However, this estimator still has non-ignorable bias when the effective sample size is small. In this paper, we propose a modified debiased IVW (mdIVW) estimator obtained by multiplying the original dIVW estimator by a modification factor. After this simple correction, we show that the bias of the mdIVW estimator converges to zero at a faster rate than that of the dIVW estimator under some regularity conditions. Moreover, the mdIVW estimator has smaller variance than the dIVW estimator. We further extend the proposed method to account for instrumental variable selection and balanced horizontal pleiotropy. We demonstrate the improvement of the mdIVW estimator over the dIVW estimator through extensive simulation studies and real data analysis.
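For context, with summary statistics $(\hat\gamma_j, \hat\Gamma_j)$ and standard errors $(\sigma_{X_j}, \sigma_{Y_j})$ for instruments $j = 1, \dots, p$, the IVW and dIVW estimators can be written as (standard forms from the dIVW literature; the mdIVW modification factor itself is specified in the paper):
\[
\hat\beta_{\mathrm{IVW}} = \frac{\sum_{j=1}^{p} \sigma_{Y_j}^{-2}\,\hat\Gamma_j \hat\gamma_j}{\sum_{j=1}^{p} \sigma_{Y_j}^{-2}\,\hat\gamma_j^{2}}, \qquad
\hat\beta_{\mathrm{dIVW}} = \frac{\sum_{j=1}^{p} \sigma_{Y_j}^{-2}\,\hat\Gamma_j \hat\gamma_j}{\sum_{j=1}^{p} \sigma_{Y_j}^{-2}\,\big(\hat\gamma_j^{2} - \sigma_{X_j}^{2}\big)},
\]
where the subtraction of $\sigma_{X_j}^{2}$ in the denominator removes the first-order weak-instrument bias of the IVW estimator.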
Stability and optimal convergence analysis of a non-uniform implicit-explicit L1 finite element method (IMEX-L1-FEM) is studied for a class of time-fractional linear partial differential/integro-differential equations with a non-self-adjoint elliptic part having space-time variable coefficients. The proposed scheme is based on a combination of an IMEX-L1 method on a graded mesh in the temporal direction and a finite element method in the spatial direction. With the help of a discrete fractional Gr\"{o}nwall inequality, global almost optimal error estimates in the $L^2$- and $H^1$-norms are derived for the problem with initial data $u_0 \in H_0^1(\Omega)\cap H^2(\Omega)$. The novelty of our approach lies in managing the interaction between the L1 approximation of the fractional derivative and the time-discrete elliptic operator to derive the optimal estimate in the $H^1$-norm directly. Furthermore, a superconvergence result is established when the elliptic operator is self-adjoint with time- and space-varying coefficients, and, as a consequence, an $L^\infty$ error estimate is obtained for 2D problems, again with initial data in $H_0^1(\Omega)\cap H^2(\Omega)$. All results proved in this paper are valid uniformly as $\alpha\longrightarrow 1^{-}$, where $\alpha$ is the order of the Caputo fractional derivative. Numerical experiments are presented to validate our theoretical findings.
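For reference, the L1 approximation of the Caputo derivative of order $\alpha \in (0,1)$ on a non-uniform mesh $0 = t_0 < t_1 < \dots < t_N = T$ reads (standard discretization; notation ours):
\[
{}^{C}\!D_t^{\alpha} u(t_n) \;\approx\; \frac{1}{\Gamma(1-\alpha)} \sum_{k=1}^{n} \frac{u(t_k) - u(t_{k-1})}{t_k - t_{k-1}} \int_{t_{k-1}}^{t_k} (t_n - s)^{-\alpha}\,\mathrm{d}s,
\]
with a graded mesh such as $t_k = T (k/N)^r$, $r \ge 1$, compensating for the weak initial singularity of the solution.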
Symplectic integrators are widely used numerical integrators for Hamiltonian mechanics, which preserve the Hamiltonian structure (symplecticity) of the system. Although a symplectic integrator does not conserve the energy of the system, it is well known that there exists a conserved modified Hamiltonian, called the shadow Hamiltonian. For Nambu mechanics, one of the generalizations of Hamiltonian mechanics, we can also construct structure-preserving integrators by the same procedure used to construct symplectic integrators. For such structure-preserving integrators, however, the existence of shadow Hamiltonians is non-trivial. This is because Nambu mechanics is driven by multiple Hamiltonians, and it is non-trivial whether the time evolution of the integrator can be cast as a Nambu mechanical time evolution driven by multiple shadow Hamiltonians. In the present paper we construct structure-preserving integrators for a simple Nambu mechanical system and derive the shadow Hamiltonians in two ways. This is the first attempt to derive shadow Hamiltonians of structure-preserving integrators for Nambu mechanics. We show that the fundamental identity, which corresponds to the Jacobi identity in Hamiltonian mechanics, plays an important role in calculating the shadow Hamiltonians using the Baker--Campbell--Hausdorff formula. It turns out that the resulting shadow Hamiltonians have indefinite forms, depending on how the fundamental identities are used. This is not a technical artifact, because the exact shadow Hamiltonians obtained independently have the same indefiniteness.
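For orientation, Nambu mechanics with two Hamiltonians $H_1, H_2$ on $\mathbb{R}^3$ evolves an observable $F$ via the Nambu bracket (standard definitions, not specific to this paper):
\[
\frac{\mathrm{d}F}{\mathrm{d}t} = \{F, H_1, H_2\} := \nabla F \cdot \left( \nabla H_1 \times \nabla H_2 \right),
\]
so that both $H_1$ and $H_2$ are conserved exactly. A structure-preserving integrator is obtained, as for symplectic integrators, by splitting the flow into exactly solvable pieces and composing them; the shadow Hamiltonians are then read off from the Baker--Campbell--Hausdorff expansion of the composition.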
Evaluating the expected information gain (EIG) is a critical task in many areas of computational science and statistics, necessitating the approximation of nested integrals. Available techniques for this problem based on quasi-Monte Carlo (QMC) methods have focused on enhancing the efficiency of either the inner or the outer integral approximation. In this work, we introduce a novel approach that extends the scope of these efforts to address inner and outer expectations simultaneously. Leveraging the principles of Owen's scrambling of digital nets, we develop a randomized QMC (rQMC) method that improves the convergence behavior of the approximation of nested integrals. We also indicate how to combine this methodology with importance sampling to address measure concentration arising in the inner integral. Our method capitalizes on the unique structure of nested expectations to offer a more efficient approximation mechanism. By incorporating Owen's scrambling techniques, we handle integrands exhibiting infinite variation in the Hardy--Krause sense, paving the way for theoretically sound error estimates. As the main contribution of this work, we derive asymptotic error bounds for the bias and variance of our estimator, along with regularity conditions under which these error bounds can be attained. In addition, we provide nearly optimal sample sizes for the rQMC approximations, which are helpful for practical numerical implementations. Moreover, we verify the quality of our estimator through numerical experiments in the context of EIG estimation. Specifically, we compare the computational efficiency of our rQMC method against standard nested MC integration across two case studies: one in thermo-mechanics and the other in pharmacokinetics. These examples highlight our approach's computational savings and enhanced applicability.
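As a toy illustration of the nested-estimator structure discussed above, the following sketch estimates the EIG for a one-dimensional conjugate Gaussian model using Owen-scrambled Sobol' points from scipy (the model, sample sizes, and variable names are our own illustrative choices, not the paper's case studies):

```python
# Toy EIG estimate via a nested randomized-QMC estimator with
# Owen-scrambled Sobol' points (scipy). Model: theta ~ N(0, 1),
# y | theta ~ N(theta, sigma^2); all choices here are illustrative.
import numpy as np
from scipy.stats import norm
from scipy.stats.qmc import Sobol

sigma = 0.5           # observation noise standard deviation
N, M = 2**10, 2**7    # outer / inner sample sizes (powers of 2 for Sobol')

# Scrambled Sobol' points mapped to standard normals via the inverse CDF.
outer = norm.ppf(Sobol(d=2, scramble=True).random(N))          # (theta, noise)
inner = norm.ppf(Sobol(d=1, scramble=True).random(M)).ravel()  # theta'

theta = outer[:, 0]
y = theta + sigma * outer[:, 1]          # simulated observations y | theta

# Outer average of log p(y_n | theta_n) minus the inner-average log-evidence.
log_lik = norm.logpdf(y, loc=theta, scale=sigma)
log_evid = np.array([np.log(np.mean(norm.pdf(yn, loc=inner, scale=sigma)))
                     for yn in y])
print("EIG estimate:", np.mean(log_lik - log_evid))
print("exact value :", 0.5 * np.log(1 + 1 / sigma**2))  # conjugate model
```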
As language models (LMs) become more capable, it is increasingly important to align them with human preferences. However, the dominant paradigm for training Preference Models (PMs) for that purpose suffers from fundamental limitations, such as lack of transparency and scalability, along with susceptibility to overfitting the preference dataset. We propose Compositional Preference Models (CPMs), a novel PM framework that decomposes one global preference assessment into several interpretable features, obtains scalar scores for these features from a prompted LM, and aggregates these scores using a logistic regression classifier. Through these simple steps, CPMs allow one to control which properties of the preference data are used to train the preference model and to build it based on features that are believed to underlie human preference judgments. Our experiments show that CPMs not only improve generalization and are more robust to overoptimization than standard PMs, but also that best-of-n samples obtained using CPMs tend to be preferred over samples obtained using conventional PMs. Overall, our approach demonstrates the benefits of endowing PMs with priors about which features determine human preferences while relying on LM capabilities to extract those features in a scalable and robust way.
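A minimal sketch of the CPM pipeline described above, with the prompted-LM scorer stubbed out (the feature names and the stub are illustrative placeholders; a real system would query an LM for each feature score):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["helpfulness", "factuality", "readability"]  # illustrative only

def score_with_lm(prompt: str, response: str, feature: str) -> float:
    """Placeholder: a real CPM prompts an LM for a scalar feature score."""
    return float((len(response) * (FEATURES.index(feature) + 1)) % 10)

def featurize(prompt: str, response: str) -> np.ndarray:
    return np.array([score_with_lm(prompt, response, f) for f in FEATURES])

# Preference pairs: (prompt, preferred response, rejected response).
pairs = [("Explain QMC.", "a long, careful answer", "meh"),
         ("Summarize.", "a concise summary", "rambling, repetitive text")]

# Train logistic regression on feature-score differences; label 1 means
# the first response is preferred. Mirrored pairs provide both classes.
X = np.array([featurize(p, a) - featurize(p, b) for p, a, b in pairs] +
             [featurize(p, b) - featurize(p, a) for p, a, b in pairs])
y = np.array([1] * len(pairs) + [0] * len(pairs))
clf = LogisticRegression().fit(X, y)

# The learned coefficients expose which features drive the preference.
print(dict(zip(FEATURES, clf.coef_[0])))
```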
Non-probability survey samples are examples of data sources that have become increasingly popular in recent years, also in official statistics. However, statistical inference based on non-probability samples is much more difficult because such samples are biased and not representative of the target population (Wu, 2022). In this paper we consider a method of joint calibration for totals (Deville & S\"arndal, 1992) and quantiles (Harms & Duchesne, 2006) and use the proposed approach to extend existing inference methods for non-probability samples, such as inverse probability weighting, mass imputation and doubly robust estimators. By including quantile information in the estimation process, non-linear relationships between the target and auxiliary variables can be approximated in a way similar to step-wise (piecewise-constant) regression. Our simulation study demonstrates that the estimators in question are more robust against model mis-specification and, as a result, help to reduce bias and improve estimation efficiency. Variance estimation for the proposed approach is also discussed. We show that existing inference methods can be used and that the resulting confidence intervals attain nominal coverage levels. Finally, we apply the proposed methods to estimate the share of vacancies aimed at Ukrainian workers in Poland using an integrated set of administrative and survey data about job vacancies. The proposed approaches have been implemented in two R packages (nonprobsvy and jointCalib), which were used to conduct the simulation and empirical studies.
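For reference, the joint calibration step seeks weights $w_i$ for the non-probability sample $S$ that simultaneously reproduce known population totals and quantiles of the auxiliary variables (constraints written in our notation):
\[
\sum_{i \in S} w_i\, \boldsymbol{x}_i = \boldsymbol{t}_x, \qquad \sum_{i \in S} w_i\, \mathbf{1}\{x_{ij} \le Q_{x_j}(p)\} = p\,N,
\]
where $\boldsymbol{t}_x$ is the vector of population totals, $Q_{x_j}(p)$ is the population quantile of order $p$ of the $j$-th auxiliary variable, and $N$ is the population size.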