We discuss Cartan-Schouten metrics (Riemannian or pseudo-Riemannian metrics that are parallel with respect to the Cartan-Schouten canonical connection) on perfect Lie groups, with foreseen applications in Information Geometry. Throughout this work, the tangent bundle TG and the cotangent bundle T*G of a Lie group G are always endowed with their Lie group structures induced by the right trivialization. We show that TG and T*G are isomorphic if G possesses a biinvariant Riemannian or pseudo-Riemannian metric. We also show that if a Cartan-Schouten metric exists on a perfect Lie group, then it must be biinvariant. We compute all such metrics on the cotangent bundles of simple Lie groups. We further show the following. Endowed with their canonical Lie group structures, the set of unit dual quaternions is isomorphic to TSU(2), and the set of unit dual split quaternions is isomorphic to T*SL(2,R). The group SE(3) of special rigid displacements of the Euclidean 3-space is isomorphic to T*SO(3), and the group SE(2,1) of special rigid displacements of the Minkowski 3-space is isomorphic to T*SO(2,1). Some results on SE(3) by N. Miolane and X. Pennec, and by M. Zefran, V. Kumar and C. Croke, are generalized to SE(2,1) and to T*G for any simple Lie group G.
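For orientation, the following display recalls in standard notation (ours, not quoted from the paper) the canonical Cartan-Schouten connection on left-invariant vector fields X, Y of G, the parallelism condition defining a Cartan-Schouten metric g, and the ad-invariance condition characterising biinvariant metrics on the Lie algebra $\mathfrak g$:
\[
\nabla_X Y = \tfrac12\,[X,Y], \qquad \nabla g = 0, \qquad \langle [x,y],z\rangle + \langle y,[x,z]\rangle = 0 \quad \text{for all } x,y,z\in\mathfrak g .
\]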
Continual learning algorithms strive to acquire new knowledge while preserving prior information. Often, these algorithms emphasise stability and restrict network updates upon learning new tasks. In many cases, such restrictions come at a cost to the model's plasticity, i.e. the model's ability to adapt to the requirements of a new task. But is all change detrimental? Here, we approach this question by proposing that activation spaces in neural networks can be decomposed into two subspaces: a readout range, in which change affects prior tasks, and a null space, in which change does not alter prior performance. Based on experiments with this novel technique, we show that, indeed, not all activation change is associated with forgetting. Instead, only change in the subspace visible to the readout of a task can decrease stability, while restricting change outside of this subspace is associated only with a loss of plasticity. Analysing various commonly used algorithms, we show that regularisation-based techniques do not fully disentangle the two subspaces and, as a result, restrict plasticity more than necessary. We expand our results by investigating a linear model in which we can manipulate learning in the two subspaces directly and thus causally link activation changes to stability and plasticity. For hierarchical, nonlinear cases, we present an approximation that enables us to estimate functionally relevant subspaces at every layer of a deep nonlinear network, corroborating our previous insights. Together, this work provides novel means to derive insights into the mechanisms behind stability and plasticity in continual learning and may serve as a diagnostic tool to guide the development of future continual learning algorithms that stabilise inference while allowing maximal space for learning.
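As a minimal illustration of the proposed decomposition (a numpy sketch under our own simplifying assumptions, not the authors' code): for a frozen linear readout W of a prior task, an activation change delta_h splits into a component in the row space of W, which is visible to the readout, and an orthogonal null-space component, which cannot affect the prior task's outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
d_hidden, n_classes = 64, 10

W = rng.normal(size=(n_classes, d_hidden))   # frozen readout of a prior task
delta_h = rng.normal(size=d_hidden)          # activation change caused by new-task learning

# Orthonormal basis of the readout's row space via SVD.
_, _, Vt = np.linalg.svd(W, full_matrices=False)
P_readout = Vt.T @ Vt                        # projector onto the readout range
delta_visible = P_readout @ delta_h          # component that affects prior-task outputs
delta_null = delta_h - delta_visible         # component that leaves prior-task outputs unchanged

# Only the visible component changes the readout's output.
print(np.allclose(W @ delta_null, 0, atol=1e-10))            # True
print(np.linalg.norm(delta_visible), np.linalg.norm(delta_null))
```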
Continuous level Monte Carlo is an unbiased, continuous version of the celebrated multilevel Monte Carlo method. The approximation level is assumed to be continuous, resulting in a stochastic process describing the quantity of interest. Continuous level Monte Carlo methods naturally allow for samplewise adaptive mesh refinements, which are indicated by goal-oriented error estimators. Within the estimator, the samplewise refinement levels are drawn from an exponentially distributed random variable. Unfortunately, in practical examples this results in high costs due to the large variance of the samples. In this paper we propose a variant of continuous level Monte Carlo in which a quasi-Monte Carlo sequence is used to "sample" the exponential random variable. We provide a complexity theorem for this novel estimator and show, both theoretically and in practice, that it reduces the variance of the overall estimator.
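The key substitution can be sketched in a few lines of Python (illustrative only; the particular sequence, rate parameter and sample size are placeholders, not the paper's setup): a low-discrepancy van der Corput sequence is pushed through the inverse CDF of the exponential distribution, replacing i.i.d. draws of the refinement level.

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput low-discrepancy sequence."""
    points = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            points[i] += f * (k % base)
            k //= base
    return points

lam = 1.0                        # rate of the exponential level distribution (placeholder)
u = van_der_corput(128)          # QMC points in (0, 1)
levels = -np.log(1.0 - u) / lam  # inverse-CDF transform: exponential "samples"
print(levels.mean())             # close to 1/lam, with lower variability than i.i.d. draws
```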
We introduce QUICK, a group of novel optimized CUDA kernels for the efficient inference of quantized Large Language Models (LLMs). QUICK addresses the shared memory bank-conflict problem of state-of-the-art mixed-precision matrix multiplication kernels. Our method interleaves the quantized weight matrices of LLMs offline so as to skip the shared memory write-back after dequantization. We demonstrate up to 1.91x speedup over existing kernels of AutoAWQ on larger batches and up to 1.94x throughput gain on representative LLM models on various NVIDIA GPU devices.
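A toy Python sketch of the offline step (the permutation below is purely illustrative; the actual interleaving pattern is kernel- and hardware-specific and is not given in the abstract): the quantized weight matrix is reordered once on the host so that, after dequantization, values already sit in the order consumed by the matrix-multiply fragments, removing the need for a shared-memory write-back.

```python
import numpy as np

def interleave_columns(qweight: np.ndarray, group: int = 8) -> np.ndarray:
    """Reorder columns of a quantized weight matrix in fixed-size groups.

    Purely illustrative permutation: real kernels choose the pattern to match
    the tensor-core fragment layout so no shared-memory write-back is needed.
    """
    rows, cols = qweight.shape
    assert cols % group == 0
    idx = np.arange(cols).reshape(-1, group)
    # Example pattern: swap even and odd positions within each group.
    idx = idx[:, [i ^ 1 for i in range(group)]].ravel()
    return qweight[:, idx]

qweight = np.arange(32, dtype=np.int8).reshape(2, 16)   # toy stand-in for packed weights
print(interleave_columns(qweight))
```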
Byzantine consensus allows n processes to decide on a common value in spite of arbitrary failures. The seminal Dolev-Reischuk bound states that any deterministic solution to Byzantine consensus exchanges $\Omega(n^2)$ bits. In recent years, great advances have been made in deterministic Byzantine agreement for partially synchronous networks, with state-of-the-art cryptographic solutions achieving $O(n^2 \kappa)$ bits (where $\kappa$ is the security parameter) and nearly matching the lower bound. In contrast, for synchronous networks, optimal solutions with $O(n^2)$ bits, no cryptography and the same failure tolerance have been known for more than three decades. Can this gap between network models be closed? In this paper, we present Repeater, the first generic transformation of Byzantine agreement algorithms from synchrony to partial synchrony. Repeater is modular, relying on existing and novel algorithms for its sub-modules. With the right choice of modules, Repeater requires no additional cryptography, is optimally resilient ($n = 3t+1$, where $t$ is the maximum number of failures) and, for constant-size inputs, preserves the worst-case per-process bit complexity of the transformed synchronous algorithm. Leveraging Repeater, we present the first partially synchronous algorithm that (1) achieves optimal bit complexity ($O(n^2)$ bits), (2) resists a computationally unbounded adversary (no cryptography), and (3) is optimally resilient ($n = 3t+1$), thus showing that the Dolev-Reischuk bound is tight in partial synchrony. Moreover, we adapt Repeater to long inputs, introducing several new algorithms with improved complexity and weaker (or completely absent) cryptographic assumptions.
In the realm of cost-sharing mechanisms, the vulnerability to Sybil strategies (also known as false-name strategies, in which agents create fake identities to manipulate outcomes) has not yet been studied. In this paper, we examine several cost-sharing mechanisms proposed in the literature and highlight their non-Sybil-resistant nature. Furthermore, we prove that, under mild conditions, a Sybil-proof cost-sharing mechanism for public excludable goods is at least $(n+1)/2$-approximate. This finding reveals an exponential increase in the worst-case social cost relative to environments where agents are restricted from using Sybil strategies. To circumvent these negative results, we introduce the concept of \textit{Sybil Welfare Invariant} mechanisms, in which a mechanism does not decrease its welfare under Sybil strategies when agents choose weakly dominant strategies and have subjective prior beliefs over the other players' actions. Finally, we prove that the Shapley value mechanism for symmetric and submodular cost functions satisfies this property, and we deduce that the worst-case social cost of this mechanism under equilibria with Sybil strategies is the $n$th harmonic number $\mathcal H_n$, matching the worst-case social cost bound for cost-sharing mechanisms. This suggests that any group of agents, each with private valuations, can fund public excludable goods permissionlessly and anonymously, achieving efficiency comparable to that of permissioned and non-anonymous domains, even when the total number of participants is unknown.
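As background, a minimal Python sketch (a textbook construction, not code from the paper) of the Shapley-value mechanism for the classic public excludable good with cost $C(S)=1$ for every nonempty $S$: each tentatively served agent is charged an equal share, agents unwilling to pay are dropped, and the process repeats until the served set stabilises.

```python
def shapley_public_excludable(valuations):
    """Moulin mechanism with Shapley shares for a unit-cost public excludable good.

    valuations: dict agent -> private value. Returns (served set, payments).
    """
    served = set(valuations)
    while served:
        share = 1.0 / len(served)                       # equal (Shapley) cost share
        dropouts = {i for i in served if valuations[i] < share}
        if not dropouts:
            return served, {i: share for i in served}
        served -= dropouts
    return set(), {}

# Worst case for efficiency: valuations near 1/i make agents drop out one by one,
# yielding a social cost close to the n-th harmonic number H_n.
print(shapley_public_excludable({1: 1.0, 2: 0.45, 3: 0.2}))   # ({1}, {1: 1.0})
```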
The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess fundamental conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully observed, discretized, or censored realizations from multivariate Gaussian models remains unexplored. This article fills this important gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models beyond those relying on fully observed, discretized or censored Gaussians. This result leverages the closure of the SUE family under linear combinations, conditioning and marginalization to prove that it is conjugate to the likelihood induced by general multivariate regression models for fully observed, censored or dichotomized realizations from skew-elliptical distributions. This advancement substantially enlarges the set of models that enable conjugate Bayesian inference to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's t and skew-t, among others.
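Schematically, the known SUN special case that this article generalises can be written as follows (notation ours, parameter updates omitted): a SUN prior combined with a probit likelihood yields a SUN posterior, and the article extends this closure from SUN priors with Gaussian-based likelihoods to SUE priors with skew-elliptical ones.
\[
\beta \sim \mathrm{SUN}, \qquad y_i \mid \beta \ \overset{\text{ind}}{\sim}\ \mathrm{Bernoulli}\{\Phi(x_i^{\top}\beta)\},\ i=1,\dots,n \quad\Longrightarrow\quad \beta \mid y_{1:n} \sim \mathrm{SUN},
\]
with suitably updated parameters.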
Quantization for a Borel probability measure refers to the idea of approximating a given probability measure by a discrete probability measure whose support contains a finite number of elements. In this paper, we consider a Borel probability measure $P$ on $\mathbb R^2$ whose support is a nonuniform stretched Sierpi\'{n}ski triangle generated by a set of three contractive similarity mappings on $\mathbb R^2$. For this probability measure, we investigate the optimal sets of $n$-means and the $n$th quantization errors for all positive integers $n$.
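For completeness, the objects studied can be recalled in standard notation (which may differ slightly from the paper's): the $n$th quantization error of $P$ is
\[
V_n(P) \;=\; \inf\Big\{\int \min_{a\in\alpha}\|x-a\|^2\, dP(x) \;:\; \alpha\subset\mathbb R^2,\ 1\le \operatorname{card}(\alpha)\le n\Big\},
\]
and an optimal set of $n$-means is any set $\alpha$ attaining this infimum.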
Following initial work by JaJa and Ahlswede/Cai, and inspired by a recent renewed surge in interest in deterministic identification via noisy channels, we consider the problem in its generality for memoryless channels with finite output alphabet but arbitrary input alphabet. Such a channel is essentially given by (the closure of) the subset of its output distributions in the probability simplex. Our main findings are that the maximum number of messages thus identifiable scales super-exponentially as $2^{R\,n\log n}$ with the block length $n$, and that the optimal rate $R$ is upper and lower bounded in terms of the covering (aka Minkowski, or Kolmogorov, or entropy) dimension $d$ of the output set: $\frac14 d \leq R \leq d$. Leading up to the general case, we treat the important special case of the so-called Bernoulli channel with input alphabet $[0;1]$ and binary output, which has $d=1$, to gain intuition. Along the way, we prove a Hypothesis Testing Lemma (generalising an earlier insight of Ahlswede regarding the intersection of typical sets) which implies that, for the construction of a deterministic identification code, it suffices to ensure pairwise reliable distinguishability of the output distributions. These results are then shown to generalise directly to classical-quantum channels with finite-dimensional output quantum system (but arbitrary input alphabet), and in particular to quantum channels on finite-dimensional quantum systems under the constraint that the identification code can only use tensor product inputs.
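For reference, one standard convention for the covering (Minkowski) dimension appearing in the bounds is, in terms of the minimal number $N(\epsilon)$ of $\epsilon$-balls needed to cover the output set (notation ours, not quoted from the paper):
\[
d \;=\; \limsup_{\epsilon\to 0} \frac{\log N(\epsilon)}{\log(1/\epsilon)}, \qquad \tfrac14\, d \;\le\; R \;\le\; d .
\]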
Purpose: The capacity to isolate and recognize individual characters from facsimile images of papyrus manuscripts yields rich opportunities for digital analysis. For this reason the `ICDAR 2023 Competition on Detection and Recognition of Greek Letters on Papyri' was held as part of the 17th International Conference on Document Analysis and Recognition. This paper discusses our submission to the competition. Methods: We used an ensemble of YOLOv8 models to detect and classify individual characters and employed two different approaches for refining the character predictions: a transformer-based DeiT approach and a ResNet-50 model trained on a large corpus of unlabelled data using SimCLR, a self-supervised learning method. Results: Our submission won the recognition challenge with a mean average precision (mAP) of 42.2% and was runner-up in the detection challenge with a mAP of 51.4%. At the more relaxed intersection-over-union threshold of 0.5, we achieved the highest mean average precision and mean average recall results for both detection and classification. Conclusion: The results demonstrate the potential of these techniques for automated character recognition on historical manuscripts. We ran the prediction pipeline on more than 4,500 images from the Oxyrhynchus Papyri to illustrate the utility of our approach, and we release the results publicly in multiple formats.
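A minimal sketch of such a detect-then-refine pipeline in Python (illustrative only: the weight files are placeholders, the ensemble fusion is omitted, and `refine` stands in for the DeiT or SimCLR-pretrained ResNet-50 re-scoring step; this is not the competition submission's code):

```python
from ultralytics import YOLO
from PIL import Image

# Ensemble members; the weight files here are generic placeholders.
detectors = [YOLO(w) for w in ["yolov8l.pt", "yolov8x.pt"]]

def detect_characters(image_path, refine=None):
    """Detect character boxes with each ensemble member, optionally re-classify crops."""
    image = Image.open(image_path)
    predictions = []
    for det in detectors:
        result = det.predict(image_path, verbose=False)[0]
        for xyxy, conf, cls in zip(result.boxes.xyxy.tolist(),
                                   result.boxes.conf.tolist(),
                                   result.boxes.cls.tolist()):
            label = int(cls)
            if refine is not None:                  # e.g. a DeiT or SimCLR-pretrained classifier
                crop = image.crop(tuple(xyxy))
                label = refine(crop)
            predictions.append((xyxy, float(conf), label))
    # A real pipeline would merge the ensemble's boxes (e.g. weighted box fusion).
    return predictions
```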
Mesh-based Graph Neural Networks (GNNs) have recently shown the capability to simulate complex multiphysics problems with accelerated run times. However, mesh-based GNNs require a large number of message-passing (MP) steps and suffer from over-smoothing for problems involving very fine meshes. In this work, we develop a multiscale mesh-based GNN framework that mimics a conventional iterative multigrid solver, coupled with adaptive mesh refinement (AMR), to mitigate these challenges of conventional mesh-based GNNs. We use the framework to accelerate phase field (PF) fracture problems involving coupled partial differential equations with a near-singular operator due to the near-zero modulus inside the crack. We define the initial graph representation using all mesh resolution levels. We perform a series of downsampling steps using Transformer MP GNNs to reach the coarsest graph, followed by upsampling steps to return to the original graph. We use skip connections from the embeddings generated during coarsening to prevent over-smoothing. We use Transfer Learning (TL) to significantly reduce the size of the training datasets needed to simulate different crack configurations and loading conditions. The trained framework showed accelerated simulation times while maintaining high accuracy for all cases compared to the physics-based PF fracture model. Finally, this work provides a new approach to accelerate a variety of mesh-based engineering multiphysics problems.
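A schematic PyTorch sketch of the multigrid-style pass (our own simplification, not the paper's architecture: message passing is abbreviated to per-level MLPs, and the restriction/prolongation operators between mesh levels are assumed precomputed from the AMR hierarchy):

```python
import torch
import torch.nn as nn

class MultiscaleVCycle(nn.Module):
    """Schematic multigrid-style GNN pass: downsample, process, upsample with skips.

    restrictions[l] maps node features from level l to the coarser level l+1,
    prolongations[l] maps them back; both are placeholders for operators built
    from the AMR mesh hierarchy. Message passing is abbreviated to an MLP per level.
    """
    def __init__(self, width, n_levels):
        super().__init__()
        self.blocks_down = nn.ModuleList(nn.Sequential(nn.Linear(width, width), nn.ReLU())
                                         for _ in range(n_levels))
        self.blocks_up = nn.ModuleList(nn.Sequential(nn.Linear(width, width), nn.ReLU())
                                       for _ in range(n_levels))

    def forward(self, x, restrictions, prolongations):
        skips = []
        for block, R in zip(self.blocks_down, restrictions):
            x = block(x)
            skips.append(x)            # skip connection to counteract over-smoothing
            x = R @ x                  # move to the coarser level
        for block, P, skip in zip(self.blocks_up, reversed(prolongations), reversed(skips)):
            x = P @ x + skip           # return to the finer level and add the skip
            x = block(x)
        return x
```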