The $\boldsymbol{\beta}$-model for random graphs is commonly used for representing pairwise interactions in a network with degree heterogeneity. Going beyond pairwise interactions, Stasi et al. (2014) introduced the hypergraph $\boldsymbol{\beta}$-model for capturing degree heterogeneity in networks with higher-order (multi-way) interactions. In this paper we initiate the rigorous study of the hypergraph $\boldsymbol{\beta}$-model with multiple layers, which allows for hyperedges of different sizes across the layers. To begin with, we derive the rates of convergence of the maximum likelihood (ML) estimate and establish their minimax rate optimality. We also derive the limiting distribution of the ML estimate and construct asymptotically valid confidence intervals for the model parameters. Next, we consider the goodness-of-fit problem in the hypergraph $\boldsymbol{\beta}$-model. Specifically, we establish the asymptotic normality of the likelihood ratio (LR) test under the null hypothesis, derive its detection threshold, and also its limiting power at the threshold. Interestingly, the detection threshold of the LR test turns out to be minimax optimal, that is, all tests are asymptotically powerless below this threshold. The theoretical results are further validated in numerical experiments. In addition to developing the theoretical framework for estimation and inference for hypergraph $\boldsymbol{\beta}$-models, the above results fill a number of gaps in the graph $\boldsymbol{\beta}$-model literature, such as the minimax optimality of the ML estimates and the non-null properties of the LR test, which, to the best of our knowledge, have not been studied before.
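For context, a minimal sketch of the single-layer, $k$-uniform version of the model (in our notation, following the description of Stasi et al. (2014)): each potential hyperedge $e=\{i_1,\dots,i_k\}$ appears independently with probability
\[
p_e \;=\; \frac{\exp\big(\sum_{i\in e}\beta_i\big)}{1+\exp\big(\sum_{i\in e}\beta_i\big)},
\]
so that the parameter $\beta_i$ controls the expected degree of vertex $i$; the multilayer model studied in this paper combines such layers with hyperedges of different sizes.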
Originating in Girard's Linear logic, Ehrhard and Regnier's Taylor expansion of $\lambda$-terms has been broadly used as a tool to approximate the terms of several variants of the $\lambda$-calculus. Many results arise from a Commutation theorem relating the normal form of the Taylor expansion of a term to its B\"ohm tree. This led us to consider extending this formalism to the infinitary $\lambda$-calculus, since the $\Lambda_{\infty}^{001}$ version of this calculus has B\"ohm trees as normal forms and seems to be the ideal framework to reformulate the Commutation theorem. We give a (co-)inductive presentation of $\Lambda_{\infty}^{001}$. We define a Taylor expansion on this calculus, and state that the infinitary $\beta$-reduction can be simulated through this Taylor expansion. The target language is the usual resource calculus, and in particular the resource reduction remains finite, confluent and terminating. Finally, we state the generalised Commutation theorem and use our results to provide simple proofs of some normalisation and confluence properties in the infinitary $\lambda$-calculus.
As in other estimation scenarios, likelihood-based estimation in the normal mixture set-up is highly non-robust against model misspecification and the presence of outliers (apart from being an ill-posed optimization problem). A robust alternative to the ordinary likelihood approach is proposed for this estimation problem; it performs simultaneous estimation and data clustering and leads to subsequent anomaly detection. To invoke robustness, the methodology based on the minimization of the density power divergence (or, alternatively, the maximization of the $\beta$-likelihood) is utilized under suitable constraints. An iteratively reweighted least squares approach is followed to compute the proposed estimators for the component means (or, equivalently, the cluster centers) and the component dispersion matrices, which leads to simultaneous data clustering. Some exploratory techniques are also suggested for anomaly detection, a problem of great importance in statistics and machine learning. The proposed method is validated with simulation studies under different set-ups; it performs competitively with, or better than, popular existing methods such as K-medoids, TCLUST, trimmed K-means and MCLUST, especially when the mixture components (i.e., the clusters) share regions of significant overlap, or when outlying clusters exist with small but non-negligible weights (particularly in higher dimensions). Two real datasets are also used to illustrate the performance of the newly proposed method in comparison with others, along with an application in image processing. The proposed method detects the clusters with lower misclassification rates and successfully identifies the outlying (anomalous) observations in these datasets.
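For reference, a sketch of the underlying criterion in our notation (assuming the standard form of the divergence): the density power divergence with tuning parameter $\beta>0$ between the data-generating density $g$ and a model density $f_\theta$ is
\[
d_\beta(g,f_\theta) \;=\; \int \Big\{ f_\theta^{1+\beta}(x) \;-\; \Big(1+\tfrac{1}{\beta}\Big) g(x)\, f_\theta^{\beta}(x) \;+\; \tfrac{1}{\beta}\, g^{1+\beta}(x) \Big\}\, dx,
\]
and minimizing it over $\theta$ (equivalently, maximizing the $\beta$-likelihood) is the robust counterpart of maximum likelihood, which is recovered in the limit $\beta\to 0$.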
We show that the cohomology of the Regge complex in three dimensions is isomorphic to $\mathcal{H}^{{\scriptscriptstyle \bullet}}_{dR}(\Omega)\otimes\mathcal{RM}$, the infinitesimal-rigid-body-motion-valued de~Rham cohomology. Based on the observation that the twisted de~Rham complex extends the elasticity (Riemannian deformation) complex to the linearized version of coframes, connection 1-forms, curvature and Cartan's torsion, we construct a discrete version of linearized Riemann-Cartan geometry on any triangulation and determine its cohomology.
We give optimally fast $O(\log p)$ time (per processor) algorithms for computing round-optimal broadcast schedules for message-passing parallel computing systems. This affirmatively answers the questions posed in Tr\"aff (2022). The problem is to broadcast $n$ indivisible blocks of data from a given root processor to all other processors in a (subgraph of a) fully connected network of $p$ processors with fully bidirectional, one-ported communication capabilities. In this model, $n-1+\lceil\log_2 p\rceil$ communication rounds are required. Our new algorithms compute, for each processor in the network, receive and send schedules of size $\lceil\log_2 p\rceil$ each that determine uniquely, in $O(1)$ time per communication round, the new block that the processor will receive and the already received block it has to send. Schedule computations are done independently per processor without communication. The broadcast communication subgraph is the same, easily computable, directed, $\lceil\log_2 p\rceil$-regular circulant graph used in Tr\"aff (2022) and elsewhere. We show how the schedule computations can be done in optimal time and space of $O(\log p)$, improving significantly over previous results of $O(p\log^2 p)$ and $O(\log^3 p)$. The schedule computation and broadcast algorithms are simple to implement, but their correctness and complexity are not obvious. All algorithms have been implemented, compared to previous algorithms, and briefly evaluated on a small $36\times 32$ processor-core cluster.
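As a rough illustration of the kind of circulant structure involved (not necessarily the exact graph of Tr\"aff (2022)), the sketch below lists, for each processor, the out-neighbors at power-of-two offsets modulo $p$, which yields a directed $\lceil\log_2 p\rceil$-regular circulant graph; the skip sequence here is our illustrative assumption.

```python
import math

def circulant_out_neighbors(rank: int, p: int) -> list[int]:
    """Out-neighbors of processor `rank` in a directed ceil(log2 p)-regular
    circulant graph with power-of-two skips 1, 2, 4, ... (an illustrative
    skip sequence; the broadcast graph of Traeff (2022) may differ)."""
    if p <= 1:
        return []
    q = math.ceil(math.log2(p))
    return [(rank + (1 << k)) % p for k in range(q)]

# Example: p = 6 processors, each with ceil(log2 6) = 3 out-neighbors.
for r in range(6):
    print(r, circulant_out_neighbors(r, 6))
```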
This study examines 4-bit quantization methods like GPTQ in large language models (LLMs), highlighting GPTQ's overfitting and limited enhancement in zero-shot tasks. While prior works focus merely on zero-shot measurement, we extend the task scope to more generative categories such as code generation and abstractive summarization, in which we find that INT4 quantization can significantly underperform. However, simply shifting to higher-precision formats like FP6 has been particularly challenging, and thus overlooked, due to the poor performance caused by the lack of sophisticated integration and system acceleration strategies on current AI hardware. Our results show that FP6, even with a coarse-grain quantization scheme, performs robustly across various algorithms and tasks, demonstrating its superiority in accuracy and versatility. Notably, with FP6 quantization, the \codestar-15B model performs comparably to its FP16 counterpart in code generation, and smaller models like the 406M one closely match their baselines in summarization. Neither can be achieved by INT4. To better accommodate various AI hardware and achieve the best system performance, we propose a novel 4+2 design for FP6 that achieves latency similar to the state-of-the-art INT4 fine-grain quantization. With our design, FP6 can become a promising alternative to the current 4-bit quantization methods used in LLMs.
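To make coarse-grain FP6 quantization concrete, here is a minimal sketch (our illustration, not the paper's kernel or format design): it enumerates the representable values of a hypothetical 6-bit floating-point layout with 3 exponent and 2 mantissa bits (E3M2), applies a single per-tensor scale, and rounds each weight to the nearest representable value.

```python
import numpy as np

def fp6_e3m2_values(bias: int = 3) -> np.ndarray:
    """All values representable by a hypothetical E3M2 format (3 exponent
    bits, 2 mantissa bits, no inf/NaN encodings -- an illustrative layout,
    not necessarily the one used in the paper)."""
    vals = set()
    for sign in (1.0, -1.0):
        for e in range(8):          # 3 exponent bits
            for m in range(4):      # 2 mantissa bits
                if e == 0:          # subnormal numbers
                    v = sign * (m / 4.0) * 2.0 ** (1 - bias)
                else:               # normal numbers
                    v = sign * (1.0 + m / 4.0) * 2.0 ** (e - bias)
                vals.add(v)
    return np.array(sorted(vals))

def quantize_fp6_coarse(w: np.ndarray) -> np.ndarray:
    """Coarse-grain (per-tensor) round-to-nearest quantization onto the
    E3M2 grid, using one scale for the whole tensor."""
    grid = fp6_e3m2_values()
    scale = np.abs(w).max() / np.abs(grid).max()
    idx = np.abs(w[..., None] / scale - grid).argmin(axis=-1)
    return grid[idx] * scale

w = np.random.randn(4, 8).astype(np.float32)
print(np.abs(w - quantize_fp6_coarse(w)).max())  # worst-case quantization error
```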
Subgraph-based graph representation learning (SGRL) has recently emerged as a powerful tool in many prediction tasks on graphs due to its advantages in model expressiveness and generalization ability. Most previous SGRL models face computational issues associated with the high cost of subgraph extraction for each training or test query. Recently, SUREL was proposed to accelerate SGRL; it samples random walks offline and joins these walks online as a proxy of subgraphs for representation learning. Thanks to the reusability of sampled walks across different queries, SUREL achieves state-of-the-art performance in terms of scalability and prediction accuracy. However, SUREL still suffers from high computational overhead caused by node redundancy in the sampled walks. In this work, we propose a novel framework, SUREL+, that upgrades SUREL by using node sets instead of walks to represent subgraphs. This set-based representation avoids repeated nodes by definition, but node sets can be irregular in size. To address this issue, we design a customized sparse data structure to efficiently store and index node sets, and provide a specialized operator to join them in parallel batches. SUREL+ is modularized to support multiple types of set samplers, structural features, and neural encoders that compensate for the loss of structural information caused by the reduction from walks to sets. Extensive experiments have been performed to validate SUREL+ in the prediction tasks of links, relation types, and higher-order patterns. SUREL+ achieves 3-11$\times$ speedups over SUREL while maintaining comparable or even better prediction performance; compared to other SGRL baselines, SUREL+ achieves $\sim$20$\times$ speedups and significantly improves the prediction accuracy.
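For intuition about storing irregular node sets compactly, here is a generic CSR-style layout in NumPy (an illustrative sketch of the general technique; the actual SUREL+ data structure and its parallel join operator are more involved).

```python
import numpy as np

def pack_node_sets(node_sets):
    """Flatten variable-size node sets into two arrays: `values` holds all
    node IDs back to back, and `offsets[i]:offsets[i+1]` indexes set i
    (a CSR-style layout; illustrative, not the SUREL+ implementation)."""
    offsets = np.zeros(len(node_sets) + 1, dtype=np.int64)
    offsets[1:] = np.cumsum([len(s) for s in node_sets])
    values = np.concatenate([np.asarray(sorted(s), dtype=np.int64)
                             for s in node_sets])
    return values, offsets

def get_set(values, offsets, i):
    """Retrieve the i-th node set as a contiguous slice of `values`."""
    return values[offsets[i]:offsets[i + 1]]

values, offsets = pack_node_sets([{1, 4, 7}, {2, 4}, {0, 3, 5, 9}])
print(get_set(values, offsets, 2))  # -> [0 3 5 9]
```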
We propose a threshold-type algorithm for the $L^2$-gradient flow of the Canham-Helfrich functional generalized to $\mathbb{R}^N$. The algorithm for the Willmore flow is derived as a special case in $\mathbb{R}^2$ or $\mathbb{R}^3$. The algorithm is constructed from an asymptotic expansion of the solution to the initial value problem for a fourth-order linear parabolic partial differential equation whose initial data is the indicator function of the compact set $\Omega_0$. The crucial points are to prove that the boundary $\partial\Omega_1$ of the new set $\Omega_1$ generated by our algorithm is included in an $O(t)$-neighborhood of $\partial\Omega_0$ for small time $t>0$, and to show that the derivative of the threshold function in the direction normal to $\partial\Omega_0$ stays away from zero in the small time interval. Finally, numerical examples of planar curves governed by the Willmore flow are provided using our threshold-type algorithm.
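To give a flavor of a threshold-type step with a fourth-order kernel, the sketch below evolves the indicator function of a planar set under the biharmonic heat equation $u_t=-\Delta^2 u$ via FFT for a short time and then thresholds at $1/2$; this is a simplified caricature of such schemes, not the algorithm of the paper, whose linear PDE and threshold function are chosen more carefully.

```python
import numpy as np

def threshold_step(indicator: np.ndarray, dt: float, length: float = 1.0) -> np.ndarray:
    """One schematic threshold-dynamics step: solve u_t = -Laplacian^2 u on the
    periodic square with the indicator as initial data (exactly, via FFT),
    then threshold the solution at 1/2. Illustrative only; the paper's
    algorithm uses a different fourth-order PDE and threshold function."""
    n = indicator.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    lap = -(kx ** 2 + ky ** 2)                       # Fourier symbol of the Laplacian
    u_hat = np.fft.fft2(indicator) * np.exp(-dt * lap ** 2)
    u = np.fft.ifft2(u_hat).real
    return (u >= 0.5).astype(float)

# Example: a disk of radius 0.3 on a 128x128 periodic grid, one small step.
n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
disk = ((X - 0.5) ** 2 + (Y - 0.5) ** 2 <= 0.3 ** 2).astype(float)
new_set = threshold_step(disk, dt=1e-6)
print(new_set.mean())  # area fraction of the evolved set
```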
We present algorithms for the computation of $\varepsilon$-coresets for $k$-median clustering of point sequences in $\mathbb{R}^d$ under the $p$-dynamic time warping (DTW) distance. Coresets under DTW have not been investigated before, and the analysis is not directly accessible to existing methods since DTW is not a metric. The three main ingredients that allow our construction of coresets are an adaptation of the $\varepsilon$-coreset framework of sensitivity sampling, bounds on the VC dimension of approximations to the range spaces of balls under DTW, and new approximation algorithms for the $k$-median problem under DTW. We achieve our results by investigating approximations of DTW that trade off the accuracy provided against amenability to known techniques. In particular, we observe that, given $n$ curves under DTW, one can directly construct a metric that approximates DTW on this set, permitting the use of the wealth of results on metric spaces for clustering purposes. The resulting approximations are the first with polynomial running time, and they achieve an approximation factor very similar to that of state-of-the-art techniques. We apply our results to produce a practical algorithm approximating $(k,\ell)$-median clustering under DTW.
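For readers unfamiliar with it, the $p$-DTW distance between two point sequences can be computed by the standard dynamic program; the sketch below (our notation) uses the Euclidean ground distance and aggregates the matched-pair costs with an $\ell_p$-norm over the warping path.

```python
import numpy as np

def dtw_p(A: np.ndarray, B: np.ndarray, p: float = 1.0) -> float:
    """p-DTW between point sequences A (n x d) and B (m x d): minimize, over
    all warping paths, the l_p-norm of the matched pairwise Euclidean
    distances (standard O(nm) dynamic program; illustrative notation)."""
    n, m = len(A), len(B)
    # cost[i, j] = Euclidean distance between A[i] and B[j], raised to the p-th power.
    cost = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) ** p
    dp = np.full((n + 1, m + 1), np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i, j] = cost[i - 1, j - 1] + min(dp[i - 1, j],
                                                dp[i, j - 1],
                                                dp[i - 1, j - 1])
    return dp[n, m] ** (1.0 / p)

A = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
B = np.array([[0.0, 0.1], [2.0, 0.1]])
print(dtw_p(A, B, p=1.0))
```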
We compute the Tracy-Widom distribution describing the asymptotic distribution of the largest eigenvalue of a large random matrix from the $\beta$-ensembles by solving a boundary-value problem posed by Bloemendal. The distribution is computed in two ways. The first method is a second-order finite-difference method and the second is a highly accurate Fourier spectral method. Since $\beta$ is simply a parameter in the boundary-value problem, any $\beta> 0$ can be used, in principle. The limiting distribution of the $n$th largest eigenvalue can also be computed. Our methods are available in the Julia package TracyWidomBeta.jl.
These are self-contained lecture notes on spectral independence. For an $n$-vertex graph, the spectral independence condition is a bound on the maximum eigenvalue of the $n\times n$ influence matrix, whose entries capture the influence between pairs of vertices; this matrix is closely related to the covariance matrix. We will present recent results showing that spectral independence implies that the mixing time of the Glauber dynamics is polynomial (where the degree of the polynomial depends on certain parameters). The proof utilizes local-to-global theorems, which we detail in these notes. Finally, we will present more recent results showing that spectral independence implies an optimal bound on the relaxation time (inverse spectral gap) and, with some additional conditions, an optimal mixing time bound of $O(n\log{n})$ for the Glauber dynamics. We also present the results of Anari, Liu, Oveis Gharan, and Vinzant (2019) for generating a random basis of a matroid. The analysis of the associated bases-exchange walk utilizes the local-to-global theorems used for spectral independence, together with the Trickle-Down Theorem of Oppenheim (2018) to analyze the local walks. Our focus in these notes is on the analysis of the spectral gap of the associated Markov chains from a functional analysis perspective, and we present proofs of the associated local-to-global theorems from this same Markov chain perspective.
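As a concrete example of the Glauber dynamics analyzed in these notes, here is a minimal sketch for the hardcore model at fugacity $\lambda$ on a graph (our choice of example): each step picks a uniformly random vertex and resamples its occupancy conditional on its neighbors.

```python
import random

def glauber_hardcore_step(adj, config, lam):
    """One step of Glauber dynamics for the hardcore model: pick a uniformly
    random vertex v and resample config[v] conditional on its neighbors.
    `adj` is an adjacency list; config[v] is 1 if vertex v is occupied."""
    v = random.randrange(len(adj))
    if any(config[u] for u in adj[v]):
        config[v] = 0                      # blocked by an occupied neighbor
    else:
        config[v] = 1 if random.random() < lam / (1.0 + lam) else 0
    return config

# Example: a 4-cycle at fugacity 1, run from the empty configuration.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
config = [0, 0, 0, 0]
for _ in range(1000):
    config = glauber_hardcore_step(adj, config, lam=1.0)
print(config)
```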