The $\boldsymbol{\beta}$-model for random graphs is commonly used for representing pairwise interactions in a network with degree heterogeneity. Going beyond pairwise interactions, Stasi et al. (2014) introduced the hypergraph $\boldsymbol{\beta}$-model for capturing degree heterogeneity in networks with higher-order (multi-way) interactions. In this paper we initiate the rigorous study of the hypergraph $\boldsymbol{\beta}$-model with multiple layers, which allows for hyperedges of different sizes across the layers. To begin with, we derive the rates of convergence of the maximum likelihood (ML) estimate and establish their minimax rate optimality. We also derive the limiting distribution of the ML estimate and construct asymptotically valid confidence intervals for the model parameters. Next, we consider the goodness-of-fit problem in the hypergraph $\boldsymbol{\beta}$-model. Specifically, we establish the asymptotic normality of the likelihood ratio (LR) test under the null hypothesis, derive its detection threshold, and also its limiting power at the threshold. Interestingly, the detection threshold of the LR test turns out to be minimax optimal, that is, all tests are asymptotically powerless below this threshold. The theoretical results are further validated in numerical experiments. In addition to developing the theoretical framework for estimation and inference for hypergraph $\boldsymbol{\beta}$-models, the above results fill a number of gaps in the graph $\boldsymbol{\beta}$-model literature, such as the minimax optimality of the ML estimates and the non-null properties of the LR test, which, to the best of our knowledge, have not been studied before.
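For reference, a minimal sketch of the model itself (assuming the layered model keeps the independent-hyperedge construction of Stasi et al. (2014); how parameters are shared across layers is specified in the paper): in a layer consisting of $k$-way interactions on $n$ nodes, each potential hyperedge $e=\{i_1,\ldots,i_k\}$ is present independently with probability
$$\mathbb{P}(e) = \frac{\exp\big(\sum_{i\in e}\beta_i\big)}{1+\exp\big(\sum_{i\in e}\beta_i\big)},$$
so a single degree parameter $\beta_i$ governs the propensity of node $i$ to participate in hyperedges.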
Originating in Girard's Linear logic, Ehrhard and Regnier's Taylor expansion of $\lambda$-terms has been broadly used as a tool to approximate the terms of several variants of the $\lambda$-calculus. Many results arise from a Commutation theorem relating the normal form of the Taylor expansion of a term to its B\"ohm tree. This led us to consider extending this formalism to the infinitary $\lambda$-calculus, since the $\Lambda_{\infty}^{001}$ version of this calculus has B\"ohm trees as normal forms and seems to be the ideal framework to reformulate the Commutation theorem. We give a (co-)inductive presentation of $\Lambda_{\infty}^{001}$. We define a Taylor expansion on this calculus, and state that the infinitary $\beta$-reduction can be simulated through this Taylor expansion. The target language is the usual resource calculus, and in particular the resource reduction remains finite, confluent and terminating. Finally, we state the generalised Commutation theorem and use our results to provide simple proofs of some normalisation and confluence properties in the infinitary $\lambda$-calculus.
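For background, a sketch of the usual (qualitative) clauses of the Taylor expansion, sending a $\lambda$-term to its set of resource-term approximants (the paper's infinitary definition refines this):
$$\mathcal{T}(x)=\{x\},\quad \mathcal{T}(\lambda x.M)=\{\lambda x.\,s : s\in\mathcal{T}(M)\},\quad \mathcal{T}(MN)=\{\,s\,[t_1,\ldots,t_n] : s\in\mathcal{T}(M),\ t_1,\ldots,t_n\in\mathcal{T}(N),\ n\ge 0\,\},$$
where $[t_1,\ldots,t_n]$ is a finite multiset of resource arguments, so every application is approximated by all of its finite argument bags.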
The spreading of prion proteins underlies brain neurodegeneration. This paper deals with the numerical modelling of the misfolding process of $\alpha$-synuclein in Parkinson's disease. We introduce and analyze a discontinuous Galerkin method on polygonal and polyhedral grids (PolyDG) for the semi-discrete approximation of the Fisher-Kolmogorov (FK) equation that models this process; the flexibility of PolyDG discretizations allows us to accurately simulate the wavefronts typically observed in prionic spreading. We prove stability and a priori error estimates for the semi-discrete formulation, and we use a Crank-Nicolson scheme to advance in time. For the numerical verification of our model, we first consider a manufactured solution and then a case with wavefront propagation on two-dimensional polygonal grids. Next, we carry out a simulation of $\alpha$-synuclein spreading in a two-dimensional brain slice in the sagittal plane with a polygonal agglomerated grid that takes full advantage of the flexibility of the PolyDG approximation. Finally, we present a simulation in a three-dimensional patient-specific brain geometry reconstructed from magnetic resonance images.
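For reference, a common form of the FK equation in this context (a sketch; the paper's exact coefficients and boundary conditions may differ):
$$\frac{\partial c}{\partial t}-\nabla\cdot(\mathbf{D}\nabla c)-\alpha\,c(1-c)=0\quad\text{in }\Omega\times(0,T],$$
where $c\in[0,1]$ is the relative concentration of misfolded protein, $\mathbf{D}$ the diffusion tensor, and $\alpha$ the reaction (conversion) rate; the logistic term $\alpha\,c(1-c)$ is what generates the travelling wavefronts mentioned above.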
Over the past decade, balanced datasets have been used to advance deep learning algorithms for industrial applications. In urban infrastructures and living environments, however, damage data mining cannot avoid imbalanced-data issues because of rare, unseen events and the high-quality status of improved operations. For visual inspection, the deteriorated classes acquired from the surfaces of concrete and steel components are often imbalanced. From numerous related surveys, we conclude that imbalanced-data problems can be categorised into four types: 1) missing ranges of target and label variables, 2) majority-minority class imbalance, 3) foreground-background spatial imbalance, and 4) long-tailed pixel-wise class imbalance. Since 2015, many studies of imbalanced data have used deep-learning approaches, including regression, image classification, object detection, and semantic segmentation. However, anomaly detection for imbalanced data remains less well understood. In this study, we highlight a one-class anomaly-detection application, which decides whether an instance belongs to the anomalous class or not, and demonstrate clear examples on imbalanced vision datasets: medical disease, hazardous behaviour, material deterioration, plant disease, river sludge, and disaster damage. We provide key results on the advantage of damage-vision mining, hypothesising that the more effective the range of the positive ratio, the higher the accuracy gain from the anomaly feedback. In our imbalance studies, compared with the balanced case with a positive ratio of $1/1$, we find that there is an applicable positive ratio $1/a$ at which the accuracy is consistently high. However, in the extremely imbalanced range, from one-shot to $1/(2a)$, the accuracy is inferior to that at the applicable ratio. In contrast, when the positive ratio exceeds $2/a$, training shifts into an over-mining phase without an effective gain in accuracy.
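As a toy illustration of this kind of positive-ratio sweep (our own sketch of the experimental idea, not the paper's pipeline; the data and detector here are synthetic stand-ins): train a one-class detector on normal samples only, then score test mixes whose anomaly fraction is set by the ratio $1/a$.

```python
# Toy positive-ratio sweep for one-class anomaly detection (illustrative only).
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(2000, 8))    # stand-in "normal" features
anomaly = rng.normal(3.0, 1.0, size=(1000, 8))   # stand-in "deteriorated" class

detector = IsolationForest(random_state=0).fit(normal[:1000])  # one-class training

for a in (1, 2, 5, 10, 50):                      # positive ratio 1/a
    n_pos = max(1, 1000 // a)
    X_test = np.vstack([normal[1000:], anomaly[:n_pos]])
    y_test = np.r_[np.zeros(1000), np.ones(n_pos)]
    scores = -detector.score_samples(X_test)     # higher = more anomalous
    print(f"ratio 1/{a}: AUC = {roc_auc_score(y_test, scores):.3f}")
```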
Digital processing-in-memory (PIM) architectures mitigate the memory-wall problem by performing parallel bitwise operations directly within memory. Recent works have demonstrated their algorithmic potential for accelerating data-intensive applications; however, a significant gap remains in the programming model and microarchitectural design, further exacerbated by the emerging model of partitions, which significantly complicates the control logic and periphery. Therefore, inspired by NVIDIA CUDA, this paper provides an end-to-end architectural integration of digital memristive PIM, from an abstract high-level C++ programming interface for vector operations down to the low-level microarchitecture. We begin by proposing an efficient microarchitecture and instruction set architecture (ISA) that bridge the gap between the low-level control periphery and an abstraction of PIM parallelism into warps and threads. We then propose a PIM compilation library that converts high-level C++ into ISA instructions, and a PIM driver that translates ISA instructions into PIM micro-operations. This drastically simplifies the development of PIM applications and enables PIM integration within larger existing C++ CPU/GPU programs for heterogeneous computing. Lastly, we present an efficient GPU-accelerated simulator for the proposed PIM microarchitecture; although slower than a physical PIM chip would be, it provides an accessible platform for developers to execute and debug PIM algorithms. To validate our approach, we implement state-of-the-art matrix-operation and FFT PIM-based algorithms as case studies. These examples demonstrate drastically simplified development without compromising performance, showing the potential and significance of CUDA-PIM.
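For intuition, the sketch below mocks the lowering path described above in a few lines of Python; every name here is our own illustration, not the paper's actual API or ISA.

```python
# Toy mock of the pipeline: a high-level vector op becomes one ISA-like
# instruction, which a "driver" expands into identical micro-operations
# issued to every memory partition in parallel.
from dataclasses import dataclass

@dataclass(frozen=True)
class PimInstr:
    op: str        # ISA mnemonic, e.g. "VADD" (hypothetical)
    dst: str       # destination vector
    srcs: tuple    # source vectors

def driver_lower(instr: PimInstr, num_partitions: int):
    """Expand one ISA instruction into per-partition micro-operations."""
    return [(p, instr.op, instr.dst, instr.srcs) for p in range(num_partitions)]

# "CUDA-like" element-parallel program: c = a + b across all rows.
program = [PimInstr("VADD", "c", ("a", "b"))]
for instr in program:
    for micro_op in driver_lower(instr, num_partitions=4):
        print(micro_op)  # a real driver would issue these to the PIM controller
```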
We study $L_2$-approximation problems in the worst-case setting in the weighted Korobov spaces $H_{d,\boldsymbol{\alpha},\boldsymbol{\gamma}}$ with parameters $1\ge \gamma_1\ge \gamma_2\ge \cdots\ge 0$ and $\frac{1}{2}<\alpha_1\le \alpha_2\le \cdots$. We consider the worst-case error of algorithms that use finitely many arbitrary continuous linear functionals. We discuss strong polynomial tractability (SPT), polynomial tractability (PT), and $(t_1,t_2)$-weak tractability ($(t_1,t_2)$-WT) for all $t_1>1$ and $t_2>0$ under the absolute or normalized error criterion. In particular, we obtain matching necessary and sufficient conditions for SPT and PT in terms of the parameters.
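For concreteness, one common normalization of the weighted Korobov space (a sketch; conventions vary across the literature) defines the norm through Fourier coefficients $\hat f(\boldsymbol{h})$:
$$\|f\|_{d,\boldsymbol{\alpha},\boldsymbol{\gamma}}^2=\sum_{\boldsymbol{h}\in\mathbb{Z}^d} r_{\boldsymbol{\alpha},\boldsymbol{\gamma}}(\boldsymbol{h})\,|\hat f(\boldsymbol{h})|^2,\qquad r_{\boldsymbol{\alpha},\boldsymbol{\gamma}}(\boldsymbol{h})=\prod_{j:\,h_j\neq 0}\gamma_j^{-1}\,|h_j|^{2\alpha_j},$$
so larger $\alpha_j$ enforce more smoothness in coordinate $j$, while smaller weights $\gamma_j$ diminish the influence of later coordinates; the condition $\alpha_1>\frac{1}{2}$ keeps the approximation problem well posed.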
$\textbf{Motivation:}$ Small $p$-values often need to be estimated accurately in large-scale genomic studies, both for adjusting multiple hypothesis tests and for ranking genomic features by statistical significance. For complicated test statistics whose cumulative distribution functions are analytically intractable, existing methods usually do not work well for small $p$-values, owing to a lack of accuracy or to computational restrictions. We propose a general approach for accurately and efficiently estimating small $p$-values for a broad range of complicated test statistics, based on the principle of the cross-entropy method and Markov chain Monte Carlo sampling techniques. $\textbf{Results:}$ We evaluate the performance of the proposed algorithm through simulations and demonstrate its application to three real-world examples in genomic studies. The results show that our approach can accurately evaluate small to extremely small $p$-values (e.g., $10^{-6}$ to $10^{-100}$). The proposed algorithm is helpful both for improving some existing test procedures and for developing new test procedures in genomic studies.
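To illustrate the underlying idea (a generic sketch of cross-entropy-tilted importance sampling for a simple Gaussian statistic, not the authors' algorithm): adaptively shift the proposal toward the tail event, then correct with importance weights.

```python
# Toy setting: T(X) = X with X ~ N(0,1), target p = P(X >= 6) ≈ 9.87e-10.
import numpy as np

rng = np.random.default_rng(1)
t = 6.0

# Cross-entropy iterations: move the proposal mean toward the tail event.
mu = 0.0
for _ in range(10):
    x = rng.normal(mu, 1.0, size=10_000)
    elite = np.quantile(x, 0.9)          # top-10% "elite" threshold
    mu = x[x >= min(elite, t)].mean()    # CE update from the elite samples
    if elite >= t:                       # proposal now reaches the tail; stop
        break

# Importance-sampling estimate of p = P(X >= t) under the tilted proposal.
x = rng.normal(mu, 1.0, size=100_000)
w = np.exp(-mu * x + 0.5 * mu**2)        # density ratio N(0,1)/N(mu,1)
p_hat = float(np.mean(w * (x >= t)))
print(p_hat)  # ≈ 9.9e-10; naive Monte Carlo would need ~1e11 draws
```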
The study of nonplanar drawings of graphs with restricted crossing configurations is a well-established topic in graph drawing, often referred to as beyond-planar graph drawing. Among the most studied types of drawings in this area are the $k$-planar drawings $(k \geq 1)$, in which each edge is crossed at most $k$ times. We generalize $k$-planar drawings by introducing the new family of min-$k$-planar drawings. In a min-$k$-planar drawing, edges may be crossed an arbitrary number of times, but for any two crossing edges at least one of the two has no more than $k$ crossings. We prove a general upper bound on the number of edges of min-$k$-planar drawings, a finer upper bound for $k=3$, and tight upper bounds for $k=1,2$. We also study the inclusion relations between min-$k$-planar graphs (i.e., graphs admitting min-$k$-planar drawings) and $k$-planar graphs.
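In symbols, writing $\mathrm{cr}(e)$ for the number of crossings on edge $e$ in a drawing, the definition above reads:
$$\min\{\mathrm{cr}(e),\mathrm{cr}(f)\}\le k\quad\text{for every pair }(e,f)\text{ of crossing edges},$$
whereas $k$-planarity requires $\mathrm{cr}(e)\le k$ for every edge $e$; in particular, every $k$-planar drawing is min-$k$-planar.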
For a skew polynomial ring $R=A[X;\theta,\delta]$, where $A$ is a commutative Frobenius ring, $\theta$ an endomorphism of $A$, and $\delta$ a $\theta$-derivation of $A$, we consider cyclic left module codes $\mathcal{C}=Rg/Rf\subset R/Rf$, where $g$ is a left and right divisor of $f$ in $R$. In this paper we derive a parity-check matrix when $A$ is a finite commutative Frobenius ring, using only the framework of skew polynomial rings. We consider rings $A=B[a_1,\ldots,a_s]$ that are free $B$-algebras, where the restrictions of $\delta$ and $\theta$ to $B$ are polynomial maps. If a Gr\"obner basis can be computed over $B$, then we show that all Euclidean and Hermitian dual-containing codes $\mathcal{C}=Rg/Rf\subset R/Rf$ can be computed using a Gr\"obner basis. We also give an algorithm to test whether the dual code is again a cyclic left module code. We illustrate our approach for rings of order $4$ with a non-trivial endomorphism and for the Galois ring of characteristic $4$.
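For orientation, recall the defining relation of the skew polynomial ring $R=A[X;\theta,\delta]$ (standard, and all that is needed to follow the constructions above): multiplication is determined by
$$X\,a=\theta(a)\,X+\delta(a)\qquad\text{for all }a\in A,$$
so $R$ coincides with the ordinary polynomial ring $A[X]$ exactly when $\theta=\mathrm{id}$ and $\delta=0$.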
Preconditioning is essential in iterative methods for solving linear systems of equations. We study a nonclassical matrix condition number, the $\omega$-condition number, in the context of optimal conditioning for low-rank updates of positive definite matrices. For a positive definite matrix, this condition measure is the ratio of the arithmetic and geometric means of the eigenvalues. In particular, we concentrate on linear systems with low-rank updates of positive definite matrices that are close to singular; such systems arise in the context of nonsmooth Newton methods using generalized Jacobians. We derive an explicit formula for the optimal $\omega$-preconditioned update in this framework. Evaluating or estimating the classical condition number $\kappa$ can be expensive; we show that the $\omega$-condition number can be evaluated exactly following a Cholesky or LU factorization and that it estimates the actual conditioning of a linear system significantly better. Moreover, our empirical results show a significant decrease in the number of iterations required to reach a requested residual accuracy, confirming the efficacy of the $\omega$-condition number compared with the classical condition number.
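Concretely, restating the definition above: for $A\succ 0$ with eigenvalues $\lambda_1,\ldots,\lambda_n$,
$$\omega(A)=\frac{\tfrac{1}{n}\operatorname{tr}(A)}{(\det A)^{1/n}}=\frac{\tfrac{1}{n}\sum_{i}\lambda_i}{\big(\prod_{i}\lambda_i\big)^{1/n}},$$
and after a Cholesky factorization $A=LL^{T}$ one has $\det A=\prod_i L_{ii}^2$, so $\omega(A)$ is available essentially for free, in contrast to $\kappa(A)=\lambda_{\max}(A)/\lambda_{\min}(A)$.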