Jones proposed the study of two subfactors of a $II_1$ factor as a quantization of two closed subspaces in a Hilbert space. The Pimsner-Popa probabilistic constant, the Sano-Watatani angle, the interior and exterior angles, and the Connes-St{\o}rmer relative entropy (along with a slight variant of it) are a few key invariants for a pair of subfactors that analyze their relative position. In practice, however, the explicit computation of these invariants is often difficult. In this article, we provide an in-depth analysis of a special class of pairs of subfactors, namely pairs of spin model subfactors of the hyperfinite type $II_1$ factor $R$. We first characterize when two distinct $n\times n$ complex Hadamard matrices give rise to distinct spin model subfactors. Then, a detailed investigation is carried out for pairs of (Hadamard equivalent) complex Hadamard matrices of order $2\times 2$ as well as Hadamard inequivalent complex Hadamard matrices of order $4\times 4$. To the best of our knowledge, this article is the first instance in the literature where the exact values of the Pimsner-Popa probabilistic constant and the noncommutative relative entropy have been obtained for a pair of (non-trivial) subfactors. Furthermore, we prove the factoriality of the intersection of the corresponding pair of subfactors using the `commuting square technique'. En route, we construct an infinite family of potentially new subfactors of $R$. All these subfactors are irreducible with Jones index $4n$, $n\geq 2$. As a consequence, the rigidity of the interior angle between the spin model subfactors is established. Last but not least, we explicitly compute the Sano-Watatani angle between the spin model subfactors.
In this paper, numerical methods based on Vieta-Lucas wavelets are proposed for solving a class of singular differential equations. The operational matrix of the derivative for Vieta-Lucas wavelets is derived and employed to reduce the differential equations to systems of algebraic equations via the collocation, Tau, and Galerkin schemes. Furthermore, the convergence analysis and error estimates for Vieta-Lucas wavelets are presented. In the numerical section, a comparative analysis is carried out among the different versions of the proposed Vieta-Lucas wavelet methods, and the accuracy of the approaches is evaluated by computing the errors and comparing them to existing findings.
We study the behavior of a label propagation algorithm (LPA) on the Erd\H{o}s-R\'enyi random graph $\mathcal{G}(n,p)$. Initially, given a network, each vertex starts with a random label in the interval $[0,1]$. Then, in each round of LPA, every vertex switches its label to the majority label in its neighborhood (including its own label). In the first round, ties are broken towards smaller labels, while in each subsequent round, ties are broken uniformly at random. The algorithm terminates once all labels stay the same in two consecutive iterations. LPA is successfully used in practice for detecting communities in networks (corresponding to vertex sets with the same label after termination of the algorithm). Perhaps surprisingly, LPA's performance on dense random graphs is hard to analyze, and so far convergence to consensus was known only when $np\ge n^{3/4+\varepsilon}$. By a very careful multi-stage exposure of the edges, we break this barrier and show that, when $np \ge n^{5/8+\varepsilon}$, a.a.s. the algorithm terminates with a single label. Moreover, we show that, if $np\gg n^{2/3}$, a.a.s. this label is the smallest one, whereas if $n^{5/8+\varepsilon}\le np\ll n^{2/3}$, the surviving label is a.a.s. not the smallest one.
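The dynamics described above fit in a few lines of code. The following is a minimal Python sketch of the synchronous LPA on a sample of $\mathcal{G}(n,p)$ (function and parameter names are our own, not taken from the paper):

```python
import random
from collections import Counter

def label_propagation(n, p, seed=0, max_rounds=100):
    """Minimal sketch of the synchronous LPA described above on G(n, p).

    Every vertex starts with a random label in [0, 1] and repeatedly adopts
    the majority label in its closed neighborhood; ties go to the smaller
    label in round one and are broken uniformly at random afterwards.
    Terminates once no label changes between two consecutive rounds."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u in range(n):                       # sample G(n, p)
        for v in range(u + 1, n):
            if rng.random() < p:
                adj[u].append(v)
                adj[v].append(u)
    labels = [rng.random() for _ in range(n)]
    for rnd in range(max_rounds):
        new = list(labels)
        for v in range(n):
            counts = Counter(labels[u] for u in adj[v])
            counts[labels[v]] += 1           # closed neighborhood: own label counts
            top = max(counts.values())
            tied = [lab for lab, c in counts.items() if c == top]
            new[v] = min(tied) if rnd == 0 else rng.choice(tied)
        if new == labels:                    # two identical consecutive rounds
            break
        labels = new
    return labels
```

In the dense regime of the theorem ($np \ge n^{5/8+\varepsilon}$), the result says that a.a.s. a run like this ends with a single surviving label.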
Stabilizer-free $P_k$ virtual elements are constructed on polygonal and polyhedral meshes. Here the interpolating space is the space of continuous $P_k$ polynomials on a triangular subdivision of each polygon, or a tetrahedral subdivision of each polyhedron. With such an accurate and proper interpolation, the stabilizer of the virtual elements is eliminated while the system remains positive-definite. We show that the stabilizer-free virtual elements converge at the optimal order in 2D and 3D. Numerical examples are computed, validating the theory.
We propose a simple and efficient local algorithm for graph isomorphism which succeeds for a large class of sparse graphs. This algorithm produces a low-depth canonical labeling, which is a labeling of the vertices of the graph that identifies its isomorphism class using vertices' local neighborhoods. Prior work by Czajka and Pandurangan showed that the degree profile of a vertex (i.e., the sorted list of the degrees of its neighbors) gives a canonical labeling with high probability when $n p_n = \omega( \log^{4}(n) / \log \log n )$ (and $p_{n} \leq 1/2$); subsequently, Mossel and Ross showed that the same holds when $n p_n = \omega( \log^{2}(n) )$. We first show that their analysis essentially cannot be improved: we prove that when $n p_n = o( \log^{2}(n) / (\log \log n)^{3} )$, with high probability there exist distinct vertices with isomorphic $2$-neighborhoods. Our first main result is a positive counterpart to this, showing that $3$-neighborhoods give a canonical labeling when $n p_n \geq (1+\delta) \log n$ (and $p_n \leq 1/2$); this improves a recent result of Ding, Ma, Wu, and Xu, completing the picture above the connectivity threshold. Our second main result is a smoothed analysis of graph isomorphism, showing that for a large class of deterministic graphs, a small random perturbation ensures that $3$-neighborhoods give a canonical labeling with high probability. While the worst-case complexity of graph isomorphism is still unknown, this shows that graph isomorphism has polynomial smoothed complexity.
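A close cousin of the neighborhood statistics above is iterated degree (colour) refinement: the degree profile is the one-round statistic, and iterating captures $r$-neighborhood information. The sketch below is our own illustration of this flavor of statistic, not the authors' code:

```python
def degree_profiles(adj):
    """Degree profile of each vertex: the sorted list of its neighbors'
    degrees (the one-neighborhood statistic of Czajka and Pandurangan).
    `adj` is an adjacency list."""
    deg = [len(nbrs) for nbrs in adj]
    return [tuple(sorted(deg[u] for u in adj[v])) for v in range(len(adj))]

def refine(colors, adj):
    """One colour-refinement round: a vertex's new colour is its old colour
    together with the sorted multiset of its neighbours' colours.  Iterating
    r times incorporates information from each vertex's r-neighborhood; if
    all colours end up distinct, sorting vertices by colour gives a
    canonical labeling."""
    raw = [(colors[v], tuple(sorted(colors[u] for u in adj[v])))
           for v in range(len(adj))]
    table = {c: i for i, c in enumerate(sorted(set(raw)))}  # canonical ints
    return [table[c] for c in raw]
```

For example, on the tree with edges $\{0,1\},\{0,2\},\{0,3\},\{3,4\}$, the two automorphic leaves $1$ and $2$ keep equal colours under refinement, while every other vertex is separated.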
Anomalous diffusion in the presence or absence of an external force field is often modelled in terms of fractional evolution equations, which can involve a hyper-singular source term. In this case, conventional time-stepping methods may exhibit severe order reduction. Although a second-order numerical algorithm was provided for the subdiffusion model with the simple hyper-singular source term $t^{\mu}$, $-2<\mu<-1$, in [arXiv:2207.08447], its convergence analysis remains to be proved. To fill these gaps, we present a simple and robust smoothing method for the hyper-singular source term, in which the Hadamard finite-part integral is introduced. This method is based on the smoothing/ID$m$-BDF$k$ method proposed by the authors [Shi and Chen, SIAM J. Numer. Anal., to appear] for the subdiffusion equation with a weakly singular source term. We prove that the $k$th-order convergence rate can be restored for the diffusion-wave case $\gamma \in (1,2)$, and sketch the proof for the subdiffusion case $\gamma \in (0,1)$, even if the source term is hyper-singular and the initial data are incompatible. Numerical experiments are provided to confirm the theoretical results.
We show that the problem of counting the number of $n$-variable unate functions reduces to the problem of counting the number of $n$-variable monotone functions. Using recently obtained results on $n$-variable monotone functions, we obtain counts of $n$-variable unate functions up to $n=9$. We use an enumeration strategy to obtain the number of $n$-variable balanced monotone functions up to $n=7$. We show that the problem of counting the number of $n$-variable balanced unate functions reduces to the problem of counting the number of $n$-variable balanced monotone functions, and consequently, we obtain the number of $n$-variable balanced unate functions up to $n=7$. Using enumeration, we obtain the numbers of equivalence classes of $n$-variable balanced monotone functions, unate functions and balanced unate functions up to $n=6$. Further, for each of the considered sub-classes of $n$-variable monotone and unate functions, we also obtain the corresponding numbers of $n$-variable non-degenerate functions.
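The reduction hinges on the standard fact that a Boolean function is unate if and only if it becomes monotone after complementing some subset of its inputs. A brute-force check along these lines (our own illustrative sketch, practical only for small $n$) looks like:

```python
def is_monotone(f, n):
    """f is the truth table of an n-variable Boolean function, given as a
    sequence of 2**n output bits indexed by the input bitmask.  Monotone
    means flipping any input bit from 0 to 1 never decreases the output."""
    for x in range(2 ** n):
        for i in range(n):
            if not (x >> i) & 1 and f[x] > f[x | (1 << i)]:
                return False
    return True

def is_unate(f, n):
    """Unate: monotone after complementing some subset of the inputs.
    Each candidate subset is encoded as a polarity bitmask."""
    return any(
        is_monotone(tuple(f[x ^ mask] for x in range(2 ** n)), n)
        for mask in range(2 ** n)
    )
```

For instance, with $n=2$, AND is monotone, NAND is unate but not monotone (complementing both inputs turns it into OR), and XOR is not unate.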
In highly diffusive regimes, where the mean free path $\varepsilon$ tends to zero, the radiative transfer equation has an asymptotic behavior governed by a diffusion equation with a corresponding boundary condition. Generally, the truncation error of a numerical scheme for this problem contains an $\varepsilon^{-1}$ contribution, which leads to nonuniform convergence for small $\varepsilon$. Such phenomena require highly resolved discretizations, which degrades the performance of the numerical scheme in the diffusion limit. In this paper, we first provide a priori estimates for the scaled spherical harmonic ($P_N$) radiative transfer equation. Then we present an error analysis for the spherical harmonic discontinuous Galerkin (DG) method applied to the scaled radiative transfer equation, showing that, under some mild assumptions, its solutions converge uniformly in $\varepsilon$ to the solution of the scaled radiative transfer equation. We further present an optimal convergence result for the DG method with the upwind flux on Cartesian grids. Error estimates of $\left(1+\mathcal{O}(\varepsilon)\right)h^{k+1}$ (where $h$ is the maximum element length) are obtained when tensor product polynomials of degree at most $k$ are used.
We consider the weighted least squares spline approximation of a noisy dataset. By interpreting the weights as a probability distribution, we maximize the associated entropy subject to the constraint that the mean squared error is prescribed to a desired (small) value. Acting on this prescribed error yields a robust regression method that automatically detects and removes outliers from the data during the fitting procedure, by assigning them a very small weight. We discuss the use of both spline functions and spline curves. A number of numerical illustrations are included to demonstrate the potential of the maximal-entropy approach in different application fields.
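The idea can be illustrated with a toy version: maximizing the Shannon entropy of the weights subject to a prescribed weighted mean squared error gives Gibbs-form weights $w_i \propto e^{-\lambda r_i^2}$ in the residuals $r_i$, so alternating a weighted fit with this reweighting drives outlier weights towards zero. The sketch below substitutes a straight-line fit for the spline setting and is our own illustration, not the authors' method:

```python
import math

def maxent_line_fit(xs, ys, target_mse, iters=30):
    """Toy max-entropy robust fit of y = a*x + b (a stand-in for splines).

    Alternates (i) a closed-form weighted least squares fit with
    (ii) recomputing entropy-maximizing weights w_i ∝ exp(-lam * r_i**2),
    where the multiplier lam is found by bisection so that the weighted
    mean squared error equals the prescribed target.  Points with large
    residuals (outliers) receive exponentially small weights."""
    n = len(xs)
    w = [1.0 / n] * n                      # start from uniform weights
    a = b = 0.0
    for _ in range(iters):
        sw = sum(w)
        sx = sum(wi * x for wi, x in zip(w, xs))
        sy = sum(wi * y for wi, y in zip(w, ys))
        sxx = sum(wi * x * x for wi, x in zip(w, xs))
        sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        a = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
        b = (sy - a * sx) / sw
        r2 = [(y - a * x - b) ** 2 for x, y in zip(xs, ys)]
        rmin = min(r2)
        lo, hi = 0.0, 1e6                  # bisection on the multiplier lam
        for _ in range(80):
            lam = 0.5 * (lo + hi)
            e = [math.exp(-lam * (r - rmin)) for r in r2]  # shift for stability
            z = sum(e)
            w = [ei / z for ei in e]
            if sum(wi * r for wi, r in zip(w, r2)) > target_mse:
                lo = lam                   # weighted MSE decreases in lam
            else:
                hi = lam
    return a, b, w
```

On data lying on $y = 2x + 1$ with a single corrupted point, the recovered slope and intercept are close to the true values and the outlier's weight collapses towards zero.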
Solving multiphysics-based inverse problems for geological carbon storage monitoring can be challenging when multimodal time-lapse data are expensive to collect and costly to simulate numerically. We overcome these challenges by combining computationally cheap learned surrogates with learned constraints. Not only does this combination lead to vastly improved inversions for the important fluid-flow property, permeability, it also provides a natural platform for inverting multimodal data including well measurements and active-source time-lapse seismic data. By adding a learned constraint, we arrive at a computationally feasible inversion approach that remains accurate. This is accomplished by including a trained deep neural network, known as a normalizing flow, which forces the model iterates to remain in-distribution, thereby safeguarding the accuracy of trained Fourier neural operators that act as surrogates for the computationally expensive multiphase flow simulations involving partial differential equation solves. By means of carefully selected experiments, centered around the problem of geological carbon storage, we demonstrate the efficacy of the proposed constrained optimization method on two different data modalities, namely time-lapse well and time-lapse seismic data. While permeability inversions from these two modalities each have their pros and cons, their joint inversion benefits from both, yielding superior permeability inversions and CO2 plume predictions both near and far away from the monitoring wells.
Standard multiparameter eigenvalue problems (MEPs) are systems of $k\ge 2$ linear $k$-parameter square matrix pencils. Recently, a new form of multiparameter eigenvalue problem has emerged: a rectangular MEP (RMEP) with only one multivariate rectangular matrix pencil, where we are looking for combinations of the parameters for which the rank of the pencil is not full. Applications include finding the optimal least squares autoregressive moving average (ARMA) model and the optimal least squares realization of an autonomous linear time-invariant (LTI) dynamical system. For linear and polynomial RMEPs, we give the number of solutions and show how these problems can be solved numerically by a transformation into a standard MEP. For the transformation we provide new linearizations for quadratic multivariate matrix polynomials with a specific structure of monomials and consider mixed systems of rectangular and square multivariate matrix polynomials. This numerical approach appears computationally considerably more attractive than the block Macaulay method, the only other currently available numerical method for polynomial RMEPs.