We study the behavior of a label propagation algorithm (LPA) on the Erd\H{o}s-R\'enyi random graph $\mathcal{G}(n,p)$. Initially, given a network, each vertex starts with a random label in the interval $[0,1]$. Then, in each round of LPA, every vertex switches its label to the majority label in its neighborhood (including its own label). In the first round, ties are broken towards smaller labels, while in each subsequent round, ties are broken uniformly at random. The algorithm terminates once all labels stay the same in two consecutive iterations. LPA is successfully used in practice for detecting communities in networks (corresponding to vertex sets with the same label after termination of the algorithm). Perhaps surprisingly, LPA's performance on dense random graphs is hard to analyze, and so far convergence to consensus was known only when $np\ge n^{3/4+\varepsilon}$. By a very careful multi-stage exposure of the edges, we break this barrier and show that, when $np \ge n^{5/8+\varepsilon}$, a.a.s. the algorithm terminates with a single label. Moreover, we show that, if $np\gg n^{2/3}$, a.a.s. this label is the smallest one, whereas if $n^{5/8+\varepsilon}\le np\ll n^{2/3}$, the surviving label is a.a.s. not the smallest one.
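The dynamics are simple to simulate. Below is a minimal synchronous sketch in pure Python (the function name and adjacency-list representation are mine, not the paper's; the tie-breaking follows the rules stated above):

```python
import random

def lpa(adj, labels, max_rounds=100, rng=None):
    """Synchronous label propagation: every vertex adopts the majority
    label in its closed neighborhood (neighbors plus itself).
    Round 0 breaks ties toward the smaller label; later rounds break
    ties uniformly at random.  Stops when labels repeat."""
    rng = rng or random.Random(0)
    n = len(adj)
    labels = list(labels)
    for r in range(max_rounds):
        new = []
        for v in range(n):
            counts = {}
            for u in list(adj[v]) + [v]:          # closed neighborhood
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            best = max(counts.values())
            tied = [lab for lab, c in counts.items() if c == best]
            new.append(min(tied) if r == 0 else rng.choice(tied))
        if new == labels:                          # two identical rounds
            return labels, r
        labels = new
    return labels, max_rounds
```

On a complete graph every label ties in the first round, so the smallest-label tie-breaking yields immediate consensus on the minimum label, illustrating the first-round rule.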
We consider the community recovery problem on a multilayer variant of the hypergraph stochastic block model (HSBM). Each layer is associated with an independent realization of a $d$-uniform HSBM on $N$ vertices. Given the similarity matrix containing the aggregated number of hyperedges incident to each pair of vertices, the goal is to obtain a partition of the $N$ vertices into disjoint communities. In this work, we investigate a semidefinite programming (SDP) approach and obtain information-theoretic conditions on the model parameters that guarantee exact recovery in both the assortative and the disassortative cases.
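For concreteness, the aggregated similarity matrix that serves as input here can be sketched in a few lines (a pure-Python illustration; the function name and data layout are hypothetical, not from the paper):

```python
from itertools import combinations

def similarity_matrix(N, layers):
    """Aggregate, over all layers, the number of hyperedges incident to
    each pair of vertices.  layers: list of hyperedge lists, each
    hyperedge a tuple of vertex indices in range(N)."""
    W = [[0] * N for _ in range(N)]
    for edges in layers:
        for e in edges:
            # every pair of distinct vertices inside a hyperedge counts once
            for u, v in combinations(sorted(set(e)), 2):
                W[u][v] += 1
                W[v][u] += 1
    return W
```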
The contraction$^*$-depth is the matroid depth parameter analogous to tree-depth of graphs. We establish the matroid analogue of the classical graph theory result asserting that the tree-depth of a graph $G$ is the minimum height of a rooted forest whose closure contains $G$ by proving the following for every matroid $M$ (except the trivial case when $M$ consists of loops and bridges only): the contraction$^*$-depth of $M$ plus one is equal to the minimum contraction-depth of a matroid containing $M$ as a restriction.
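The classical graph notion being lifted here can be made concrete: tree-depth satisfies the recurrence $\mathrm{td}(G) = 1 + \min_v \mathrm{td}(G - v)$ on connected graphs with at least two vertices, taking the maximum over connected components otherwise. A brute-force sketch (exponential time, tiny graphs only; not from the paper):

```python
def treedepth(vertices, edges):
    """Tree-depth via the recurrence td(G) = 1 + min_v td(G - v) on
    connected G, and the max over connected components otherwise."""
    vertices = frozenset(vertices)
    if not vertices:
        return 0
    # find connected components of the induced subgraph
    comps, seen = [], set()
    for s in vertices:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            for e in edges:
                if v in e:
                    stack.extend(u for u in e
                                 if u in vertices and u not in comp)
        seen |= comp
        comps.append(comp)
    if len(comps) > 1:
        return max(treedepth(c, edges) for c in comps)
    if len(vertices) == 1:
        return 1
    return 1 + min(treedepth(vertices - {v}, edges) for v in vertices)
```

For example, a path on four vertices has tree-depth 3: deleting an interior vertex leaves a single edge (depth 2) and an isolated vertex.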
A $k$-uniform hypergraph $H = (V, E)$ is $k$-partite if $V$ can be partitioned into $k$ sets $V_1, \ldots, V_k$ such that every edge in $E$ contains precisely one vertex from each $V_i$. We call such a hypergraph $n$-balanced if $|V_i| = n$ for each $i$. An independent set $I$ in $H$ is balanced if $|I\cap V_i| = |I|/k$ for each $i$, and a coloring is balanced if each color class induces a balanced independent set in $H$. In this paper, we provide a lower bound on the balanced independence number $\alpha_b(H)$ in terms of the average degree $D = |E|/n$, and an upper bound on the balanced chromatic number $\chi_b(H)$ in terms of the maximum degree $\Delta$. Our results match those of recent work of Chakraborti for $k = 2$.
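The definitions can be verified mechanically; the following small checker (a hypothetical helper, for illustration only) tests both conditions at once:

```python
def is_balanced_independent(I, parts, edges):
    """Check that I is a balanced independent set of a k-partite
    k-uniform hypergraph: I intersects every part V_i in exactly
    |I|/k vertices, and no hyperedge lies entirely inside I."""
    I = set(I)
    k = len(parts)
    if len(I) % k != 0:
        return False
    if any(len(I & set(p)) != len(I) // k for p in parts):
        return False                      # not balanced across parts
    return all(not set(e) <= I for e in edges)   # independence
```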
This work is concerned with the analysis of a space-time finite element discontinuous Galerkin method on polytopal meshes (XT-PolydG) for the numerical discretization of wave propagation in coupled poroelastic-elastic media. The mathematical model consists of the low-frequency Biot's equations in the poroelastic medium and the elastodynamics equation for the elastic one. To realize the coupling, suitable transmission conditions on the interface between the two domains are (weakly) embedded in the formulation. The proposed PolydG discretization in space is then coupled with a dG time integration scheme, resulting in a full space-time dG discretization. We present the stability analysis for both the continuous and the semidiscrete formulations, and we derive error estimates for the semidiscrete formulation in a suitable energy norm. The method is applied to a wide set of numerical test cases to verify the theoretical bounds. Examples of physical interest are also presented to investigate the capability of the proposed method in relevant geophysical scenarios.
We propose a new method for the construction of layer-adapted meshes for singularly perturbed differential equations (SPDEs), based on mesh partial differential equations (MPDEs) that incorporate \emph{a posteriori} solution information. There are numerous studies on the development of parameter-robust numerical methods for SPDEs that depend on the layer-adapted mesh of Bakhvalov. In~\citep{HiMa2021}, a novel MPDE-based approach for constructing a generalisation of these meshes was proposed. As with most layer-adapted mesh methods, the algorithms in that article depended on detailed derivations of \emph{a priori} bounds on the SPDE's solution and its derivatives. In this work we extend that approach so that it instead uses \emph{a posteriori} computed estimates of the solution. We present detailed algorithms for the efficient implementation of the method, and numerical results for the robust solution of two-parameter reaction-convection-diffusion problems, in one and two dimensions. We also provide full FEniCS code for a one-dimensional example.
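Stripped of the MPDE machinery, the a posteriori idea amounts to redistributing mesh points so that a monitor function computed from the numerical solution is equidistributed. A single de Boor-style sweep serves as a sketch (the paper instead solves a mesh PDE; the names here are hypothetical):

```python
import bisect

def equidistribute(x, M, N):
    """Given a mesh x and positive monitor values M on its cells
    (e.g. computed from an a posteriori solution estimate), return
    N+1 new points splitting the integral of M into equal parts."""
    # cumulative integral of the piecewise-constant monitor
    I = [0.0]
    for i in range(len(x) - 1):
        I.append(I[-1] + M[i] * (x[i + 1] - x[i]))
    total = I[-1]
    new = []
    for k in range(N + 1):
        target = total * k / N
        j = min(bisect.bisect_left(I, target), len(I) - 1)
        if j == 0:
            new.append(x[0])
        else:
            # linear interpolation within cell j-1 of the old mesh
            frac = (target - I[j - 1]) / (I[j] - I[j - 1])
            new.append(x[j - 1] + frac * (x[j] - x[j - 1]))
    return new
```

With a constant monitor the sweep returns a uniform mesh; a monitor concentrated near a layer pulls points into it.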
We study the numerical approximation of a coupled hyperbolic-parabolic system by a family of discontinuous Galerkin space-time finite element methods. The model is rewritten as a first-order evolutionary problem that is treated by the unified abstract solution theory of R.\ Picard. To preserve the mathematical structure of the evolutionary equation on the fully discrete level, we define suitable generalizations of the distributional gradient and divergence operators on the broken polynomial spaces on which the discontinuous Galerkin approach is built. Well-posedness of the fully discrete problem and error estimates for the discontinuous Galerkin approximation in space and time are proved.
Regularized generalized canonical correlation analysis (RGCCA) is a component-based approach that generalizes regularized canonical correlation analysis to three or more sets of variables and aims to study the relationships between them. Sparse generalized canonical correlation analysis (SGCCA), proposed in Tenenhaus et al. (2014), combines RGCCA with an $\ell_1$-penalty and does not require the blocks to be fully connected, which makes SGCCA a flexible method for analyzing a wide variety of practical problems in biology, chemistry, sensory analysis, marketing, food research, and other fields. In Tenenhaus et al. (2014), an iterative algorithm for SGCCA was designed based on the solution, proposed in Witten et al. (2009), to the subproblem (LM-P1 for short) of maximizing a linear function over the intersection of an $\ell_1$-norm ball and a unit $\ell_2$-norm sphere. However, the solution to the subproblem LM-P1 proposed in Witten et al. (2009) is not correct, which may be the reason that the iterative algorithm for SGCCA is slow and not always convergent. We therefore first characterize the solutions to the subproblem LM-P1 and to the subproblems LM-P2 and LM-P3, which maximize a linear function over the intersection of an $\ell_1$-norm sphere and a unit $\ell_2$-norm sphere, and of an $\ell_1$-norm ball and a unit $\ell_2$-norm sphere, respectively. We then provide more efficient block coordinate descent (BCD) algorithms for SGCCA and its two variants, called SGCCA-BCD1, SGCCA-BCD2 and SGCCA-BCD3, corresponding to the subproblems LM-P1, LM-P2 and LM-P3, respectively, and prove that they all globally converge to their stationary points. We further propose gradient projected (GP) methods for SGCCA and its two variants when using the Horst scheme, called SGCCA-GP1, SGCCA-GP2 and SGCCA-GP3, corresponding to the subproblems LM-P1, LM-P2 and LM-P3, respectively, and prove that they all globally converge to their stationary points.
Given a vector dataset $\mathcal{X}$ and a query vector $\vec{x}_q$, graph-based Approximate Nearest Neighbor Search (ANNS) aims to build a graph index $G$ and approximately return vectors with minimum distances to $\vec{x}_q$ by searching over $G$. The main drawback of graph-based ANNS is that a graph index may be too large to fit into memory, especially for a large-scale $\mathcal{X}$. To solve this, a Product Quantization (PQ)-based hybrid method called DiskANN stores a low-dimensional PQ index in memory and keeps the graph index on SSD, reducing memory overhead while ensuring high search accuracy. However, it suffers from two I/O issues that significantly affect overall efficiency: (1) a long routing path from the entry vertex to the query's neighborhood, which results in a large number of I/O requests, and (2) redundant I/O requests during the routing process. We propose an optimized DiskANN++ to overcome the above issues. Specifically, for the first issue, we present a query-sensitive entry vertex selection strategy that replaces DiskANN's static, graph-central entry vertex with a dynamically determined entry vertex close to the query. For the second issue, we present an isomorphic mapping of DiskANN's graph index that optimizes the SSD layout, and we propose an asynchronously optimized Pagesearch based on this layout as an alternative to DiskANN's beamsearch. Comprehensive experimental studies on eight real-world datasets demonstrate DiskANN++'s superior efficiency: we achieve a notable 1.5x to 2.2x improvement in QPS over DiskANN under the same accuracy constraint.
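The routing process whose I/O cost is at stake here is, at its core, a best-first search over a proximity graph. An in-memory sketch of that primitive (hypothetical, pure Python; real disk-resident systems such as DiskANN page graph nodes from SSD rather than holding them in RAM):

```python
import heapq
import math

def greedy_search(graph, vectors, query, entry, k=1, beam=4):
    """Best-first (beam) search over a proximity graph.
    graph: dict of adjacency lists; vectors: list of points;
    entry: start vertex.  Returns k (distance, vertex) pairs."""
    dist = lambda v: math.dist(vectors[v], query)
    visited = {entry}
    frontier = [(dist(entry), entry)]     # min-heap of candidates
    results = [(-dist(entry), entry)]     # max-heap (negated) of best beam
    while frontier:
        d, v = heapq.heappop(frontier)
        if len(results) >= beam and d > -results[0][0]:
            break                          # no candidate can improve the beam
        for u in graph[v]:
            if u not in visited:
                visited.add(u)
                du = dist(u)
                heapq.heappush(frontier, (du, u))
                heapq.heappush(results, (-du, u))
                if len(results) > beam:
                    heapq.heappop(results)  # drop current worst
    return sorted((-d, u) for d, u in results)[:k]
```

A query-sensitive entry vertex, as in the first optimization above, shortens the walk from `entry` to the query's neighborhood and hence the number of node fetches.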
Austrin showed that the approximation ratio $\beta\approx 0.94016567$ obtained by the MAX 2-SAT approximation algorithm of Lewin, Livnat and Zwick (LLZ) is optimal modulo the Unique Games Conjecture (UGC) and modulo a Simplicity Conjecture stating that the worst performance of the algorithm is attained on so-called simple configurations. We prove Austrin's conjecture, thereby showing the optimality of the LLZ approximation algorithm, relying only on the Unique Games Conjecture. Our proof uses a combination of analytic and computational tools. We also present new approximation algorithms for two restrictions of the MAX 2-SAT problem. For MAX HORN-$\{1,2\}$-SAT, i.e., MAX CSP$(\{x\lor y,\bar{x}\lor y,x,\bar{x}\})$, in which clauses are not allowed to contain two negated literals, we obtain an approximation ratio of $0.94615981$. For MAX CSP$(\{x\lor y,x,\bar{x}\})$, i.e., when 2-clauses are not allowed to contain negated literals, we obtain an approximation ratio of $0.95397990$. By adapting Austrin's and our arguments for the MAX 2-SAT problem, we show that these two approximation ratios are also tight, modulo only the UGC. This completes a full characterization of the approximability of the MAX 2-SAT problem and its restrictions.
A Petrov-Galerkin finite element method is constructed for a singularly perturbed elliptic problem in two space dimensions. The solution contains a regular boundary layer and two characteristic boundary layers. Exponential splines are used as test functions in one coordinate direction and are combined with bilinear trial functions defined on a Shishkin mesh. The resulting numerical method is shown to be a stable parameter-uniform numerical method that achieves a higher order of convergence compared to upwinding on the same mesh.
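As a point of reference, a one-dimensional Shishkin mesh is piecewise uniform with a transition point tied to the perturbation parameter. The sketch below is illustrative only (the paper's mesh is two-dimensional and resolves a regular layer and two characteristic layers; the parameter `sigma` is a standard user-chosen constant, not taken from the paper):

```python
import math

def shishkin_mesh(N, eps, sigma=2.0):
    """Piecewise-uniform Shishkin mesh on [0,1] for a boundary layer
    at x = 0: transition point tau = min(1/2, sigma*eps*ln N), with
    N/2 cells in the fine region [0, tau] and N/2 in [tau, 1]."""
    assert N % 2 == 0, "N must be even"
    tau = min(0.5, sigma * eps * math.log(N))
    fine = [i * tau / (N // 2) for i in range(N // 2)]
    coarse = [tau + i * (1 - tau) / (N // 2) for i in range(N // 2 + 1)]
    return fine + coarse
```

When `eps` is small the fine half of the mesh is compressed into an O(eps ln N) neighborhood of the layer, which is what makes parameter-uniform convergence possible.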