A linear-time algorithm for generating auxiliary subgraphs for the 3-edge-connected components of a connected multigraph is presented. The algorithm uses an innovative graph contraction operation and makes only one pass over the graph. By contrast, the previously best-known algorithms make multiple passes over the graph to decompose it into its 2-edge-connected components or 2-vertex-connected components, then its 3-edge-connected components or 3-vertex-connected components, and then construct a cactus representation for the 2-cuts to generate the auxiliary subgraphs for the 3-edge-connected components.
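For orientation, the underlying notion can be checked directly from its definition, though nowhere near linear time. The sketch below (a naive baseline, not the paper's one-pass algorithm; networkx, the union-find grouping, and the example graph are illustrative choices) groups vertices into 3-edge-connected components by testing, via max-flow, whether each pair is joined by at least three pairwise edge-disjoint paths.

```python
# Naive definition-based baseline, NOT the paper's linear-time algorithm.
import networkx as nx
from itertools import combinations

A = nx.complete_graph(4)                          # K4 is 3-edge-connected
B = nx.relabel_nodes(nx.complete_graph(4), {i: i + 4 for i in range(4)})
G = nx.union(A, B)
G.add_edge(3, 4)                                  # a bridge joining the two blocks

uf = nx.utils.UnionFind(G.nodes)
for u, v in combinations(G.nodes, 2):
    # u, v lie in the same 3-edge-connected component iff at least three
    # pairwise edge-disjoint paths join them (certified by max-flow).
    if nx.edge_connectivity(G, u, v) >= 3:
        uf.union(u, v)
print([sorted(c) for c in uf.to_sets()])          # components {0,1,2,3} and {4,5,6,7}
```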
The aim of this study is to establish a general transformation matrix between B-spline surfaces and ANCF surface elements, extending earlier work on the conversion between the ANCF and B-spline surfaces. First, a general transformation matrix between Bezier surfaces and the ANCF surface element is established; this matrix essentially describes the linear relationship between the ANCF and Bezier representations. Moreover, the general transformation matrix can improve the efficiency of transferring a distorted configuration from computer-aided analysis (CAA) back to computer-aided design (CAD), an urgent requirement in engineering practice. In addition, a special Bezier surface control polygon is given for which the corresponding Bezier surface can be converted to an ANCF surface element with fewer degrees of freedom; the resulting ANCF surface element with 36 d.o.f. was previously presented by Dufva and Shabana, so the special control polygon can be regarded as the geometric condition for conversion to the 36-d.o.f. element. Based on the fact that a B-spline surface can be viewed as a set of Bezier surfaces joined together, a method to establish a general transformation matrix between the ANCF and lower-order B-spline surfaces is given. Notably, the general transformation is obtained in a simplified, non-recursive form.
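As a hedged illustration of such a linear relationship (curve case only; the matrix below is the classical cubic Bezier-to-Hermite conversion, not the paper's surface matrix), the ANCF-style nodal coordinates of a cubic segment, positions and slopes at both ends, are a fixed linear image of the Bezier control points; for bicubic tensor-product surfaces the analogous map would be a Kronecker product of such curve maps.

```python
# Curve-case sketch: nodal coordinates = T @ control points.
import numpy as np

# rows: r(0), r_x(0), r(1), r_x(1); columns: P0..P3 (cubic Bezier)
T = np.array([[ 1.,  0.,  0., 0.],
              [-3.,  3.,  0., 0.],
              [ 0.,  0.,  0., 1.],
              [ 0.,  0., -3., 3.]])

P = np.random.default_rng(0).random((4, 3))       # random control points in R^3
nodal = T @ P                                     # ANCF-style nodal coordinates

def bezier(x, P):
    """Evaluate the cubic Bezier curve at parameter x."""
    B = np.array([(1 - x)**3, 3*x*(1 - x)**2, 3*x**2*(1 - x), x**3])
    return B @ P

eps = 1e-6
print(np.allclose(nodal[0], bezier(0, P)))                          # r(0) = P0
print(np.allclose(nodal[1], (bezier(eps, P) - bezier(0, P)) / eps,  # r_x(0) = 3(P1-P0)
                  atol=1e-4))
```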
In the analysis of spatially resolved transcriptomics data, detecting spatially variable genes (SVGs) is crucial. Numerous computational methods exist, but varying SVG definitions and methodologies lead to incomparable results. We review 33 state-of-the-art methods, categorizing SVGs into three types: overall, cell-type-specific, and spatial-domain-marker SVGs. Our review explains the intuitions underlying these methods, summarizes their applications, and categorizes the hypothesis tests they use in the trade-off between generality and specificity for SVG detection. We discuss challenges in SVG detection and propose future directions for improvement. Our review offers insights for method developers and users, advocating for category-specific benchmarking.
A strong edge-coloring of a graph is a proper edge-coloring in which the edges of every path of length 3 receive distinct colors; in other words, every pair of edges at distance at most 2 must be colored differently. The least number of colors needed for a strong edge-coloring of a graph is the strong chromatic index. We consider the list version of the coloring and prove that the list strong chromatic index of graphs with maximum degree 3 is at most 10. This bound is tight and improves the previous bound of 11 colors. We also consider the question of whether the strong chromatic index and the list strong chromatic index always coincide. We answer it in the negative by presenting an infinite family of graphs for which the two invariants differ. For the special case of the Petersen graph, we show that its list strong chromatic index equals 7, while its strong chromatic index is 5. To the best of our knowledge, this is the first known type of edge-coloring for which there are graphs with distinct values of the chromatic index and its list version. In relation to the above, we also initiate the study of the list version of the normal edge-coloring. A normal edge-coloring of a cubic graph is a proper edge-coloring in which, for every edge, the edges adjacent to it are colored with either exactly 4 colors or exactly 2 colors. It is conjectured that 5 colors suffice for a normal edge-coloring of any bridgeless cubic graph; this conjecture is equivalent to the Petersen Coloring Conjecture. Similarly to the strong edge-coloring, the list normal edge-coloring is much more restrictive, and consequently for many graphs the list normal chromatic index is greater than the normal chromatic index. In particular, we show that there are cubic graphs with list normal chromatic index at least 9, bridgeless cubic graphs with list normal chromatic index at least 8, and cyclically 4-edge-connected cubic graphs with list normal chromatic index at least 7.
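As an independent sanity check (not taken from the paper), a small backtracking search can certify the non-list value for the Petersen graph by exhibiting a strong 5-edge-coloring; the edge indexing and the search itself are illustrative.

```python
# Backtracking search for a strong 5-edge-coloring of the Petersen graph.
# Petersen graph: outer cycle 0..4, inner pentagram 5..9, spokes i -- i+5.
edges = [(i, (i + 1) % 5) for i in range(5)] \
      + [(5 + i, 5 + (i + 2) % 5) for i in range(5)] \
      + [(i, i + 5) for i in range(5)]
edge_set = {frozenset(e) for e in edges}

def conflict(e, f):
    """Distance <= 2: the edges share a vertex or some edge joins them."""
    return bool(set(e) & set(f)) or any(frozenset((u, v)) in edge_set
                                        for u in e for v in f)

conflicts = [[j for j in range(15) if j != i and conflict(edges[i], edges[j])]
             for i in range(15)]

def backtrack(coloring, i):
    if i == 15:
        return coloring
    for c in range(5):                    # try each of the 5 colors
        if all(coloring.get(j) != c for j in conflicts[i]):
            coloring[i] = c
            if backtrack(coloring, i + 1):
                return coloring
            del coloring[i]
    return None

print(backtrack({}, 0))   # a strong 5-edge-coloring, so the index is at most 5
```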
Edge detection (ED) is a fundamental task in computer vision. While the performance of ED algorithms has been improved greatly by introducing CNN-based models, current models still suffer from unsatisfactory precision rates, especially when only a low error tolerance distance is allowed. Therefore, model architectures for more precise predictions still need investigation. In addition, the unavoidable noise in human-provided training data leads to unsatisfactory model predictions even when the inputs are edge maps themselves, which also calls for a solution. In this paper, more precise ED models are presented, built from cascaded skipping density blocks (CSDB). Our models obtain state-of-the-art (SOTA) results on several datasets, especially in average precision (AP) on a high-standard benchmark, as confirmed by extensive experiments. Also, a novel modification of the data augmentation used in training is employed, which allows noiseless data to be used in model training for the first time and thus further improves model performance. The related Python code can be found at https://github.com/Hao-B-Shu/SDPED.
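The abstract does not spell out the internals of a CSDB; purely as one hypothetical reading of "cascaded skipping density blocks", a DenseNet-style block in which every layer skip-connects to all earlier feature maps might look as follows (all names, channel counts, and layer depths are assumptions, not the paper's architecture).

```python
# Hypothetical dense-skip block; illustrative only.
import torch
import torch.nn as nn

class DenseSkipBlock(nn.Module):
    def __init__(self, in_ch: int, growth: int = 16, layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_ch + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            # each layer sees the concatenation of all earlier feature maps
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseSkipBlock(in_ch=3)
print(block(torch.randn(1, 3, 64, 64)).shape)   # torch.Size([1, 67, 64, 64])
```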
When the row and column variables of a two-way contingency table consist of the same categories, the table is specifically called a square contingency table. Since square contingency tables typically exhibit a clear association structure, a primary objective is to examine symmetric relationships and transitions between the variables. While various models and measures have been proposed to analyze these structures, for instance to understand changes in behavior between two time points or cohorts, a detailed investigation of individual categories and their interrelationships, such as shifts in brand preferences, is also required. This paper proposes a novel approach to correspondence analysis (CA) for evaluating departures from symmetry in square contingency tables with nominal categories, using a power-divergence-type measure. The approach ensures that well-known divergences can also be visualized and that, regardless of the divergence used, the CA plot consists of two principal axes with equal contribution rates. Additionally, the scaling is independent of sample size, making it well suited for comparing departures from symmetry across multiple contingency tables. Confidence regions are also constructed to enhance the accuracy of the CA plot.
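For intuition (a classical special case, not the paper's power-divergence construction), applying an SVD to the skew-symmetric part of a square table already produces axes with pairwise-equal contribution rates, since the singular values of a skew-symmetric matrix come in equal pairs; the toy table below is illustrative.

```python
# CA-style decomposition of the departure-from-symmetry component.
import numpy as np

N = np.array([[50., 20.,  5.],
              [10., 60., 15.],
              [ 2., 25., 70.]])   # toy brand-switching table

K = (N - N.T) / 2.0              # skew-symmetric departure from symmetry
U, s, Vt = np.linalg.svd(K)
print(np.round(s, 4))            # singular values come in equal pairs
coords = U[:, :2] * s[:2]        # row coordinates on the two principal axes
print(np.round(coords, 3))
```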
We propose a method utilizing physics-informed neural networks (PINNs) to solve Poisson equations that serve as control variates in the computation of transport coefficients via fluctuation formulas, such as the Green--Kubo and generalized Einstein-like formulas. By leveraging approximate solutions to the Poisson equation constructed through neural networks, our approach significantly reduces the variance of the estimator at hand. We provide an extensive numerical analysis of the estimators and detail a methodology for training neural networks to solve these Poisson equations. The approximate solutions are then incorporated into Monte Carlo simulations as effective control variates, demonstrating the suitability of the method for moderately high-dimensional problems where fully deterministic solutions are computationally infeasible.
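A minimal one-dimensional sketch of the idea (all modeling choices here are illustrative; the paper's setting is more general): for the Ornstein-Uhlenbeck generator $L\phi = -x\phi' + \phi''$ with stationary measure N(0,1) and observable $f(x) = x^2$, a small network trained on the PINN residual of the Poisson equation $L\phi = f - \mathbb{E}[f]$ yields a control variate $f - L\phi$ with sharply reduced variance; here the exact solution $\phi(x) = -x^2/2$ is known, so the outcome can be checked.

```python
# Illustrative 1D sketch: OU generator L phi = -x phi' + phi''; f(x) = x^2
# has E[f] = 1 under N(0, 1), and phi(x) = -x^2/2 solves L phi = f - 1.
import torch

torch.manual_seed(0)
f = lambda x: x**2

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

def L_phi(x):
    """Apply the OU generator to the network output via autograd."""
    x = x.requires_grad_(True)
    d1 = torch.autograd.grad(net(x).sum(), x, create_graph=True)[0]
    d2 = torch.autograd.grad(d1.sum(), x, create_graph=True)[0]
    return -x * d1 + d2

# PINN-style training on the Poisson residual; the known value E[f] = 1 is
# used here only to keep the sketch short.
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
xs = torch.randn(2048, 1)
for _ in range(500):
    opt.zero_grad()
    ((L_phi(xs) - (f(xs) - 1.0))**2).mean().backward()
    opt.step()

x = torch.randn(100_000, 1)
plain = f(x)
cv = f(x) - L_phi(x).detach()    # same mean, since E[L phi] = 0 in stationarity
print(plain.mean().item(), plain.var().item())   # ~1.0, ~2.0
print(cv.mean().item(), cv.var().item())         # ~1.0, far smaller variance
```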
The parametrization of wireless channels by so-called "beyond-diagonal reconfigurable intelligent surfaces" (BD-RIS) is mathematically characterized by a matrix whose off-diagonal entries are partially or fully populated. Physically, this corresponds to tunable coupling mechanisms between the RIS elements that originate from the RIS control circuit. Here, we derive a physics-compliant diagonal representation for BD-RIS-parametrized channels. Recognizing that the RIS control circuit, irrespective of its detailed architecture, can always be represented as a multi-port network with auxiliary ports terminated by tunable individual loads, we physics-compliantly express the BD-RIS-parametrized channel as a multi-port chain cascade of i) radio environment, ii) static parts of the control circuit, and iii) individually tunable loads. Thus, the cascade of the former two systems is terminated by a system that is mathematically always characterized by a diagonal matrix. This physics-compliant diagonal representation implies that existing algorithms for channel estimation and optimization for conventional ("diagonal") RIS can be readily applied to BD-RIS scenarios. We demonstrate this in an experimentally grounded case study. Importantly, we highlight that, operationally, an ambiguous characterization of the cascade of radio environment and the static parts of the control circuit is required, but not the breakdown into the characteristics of its two constituent systems nor the lifting of the ambiguities. Nonetheless, we demonstrate how to derive or estimate the characteristics of the static parts of the control circuit for pedagogical purposes. The diagonal representation of BD-RIS-parametrized channels also enables their treatment with coupled-dipole-based models. We furthermore derive the assumptions under which the physics-compliant BD-RIS model simplifies to the widespread linear cascaded model.
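For concreteness (notation assumed here, not quoted from the paper): terminating the auxiliary ports of the combined multi-port, i.e., the cascade of the radio environment and the static parts of the control circuit with scattering blocks $\mathbf{S}_{11}, \mathbf{S}_{12}, \mathbf{S}_{21}, \mathbf{S}_{22}$, by individual loads with reflection coefficients $\Gamma_n$ gives the standard loaded-network relation

\[
\mathbf{S}_{\mathrm{eff}} = \mathbf{S}_{11} + \mathbf{S}_{12}\,\boldsymbol{\Gamma}\,\left(\mathbf{I} - \mathbf{S}_{22}\,\boldsymbol{\Gamma}\right)^{-1}\mathbf{S}_{21},
\qquad
\boldsymbol{\Gamma} = \operatorname{diag}(\Gamma_1, \ldots, \Gamma_N),
\]

in which the tunable part enters only through the diagonal matrix $\boldsymbol{\Gamma}$; this is the diagonal representation referred to above.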
To avoid ineffective collisions between equilibrium states, the hybrid method with deviational particles (HDP) has been proposed for integrating the Fokker-Planck-Landau system; it leaves open, however, the issue of sampling deviational particles from the high-dimensional source term. In this paper, we present an adaptive sampling (AS) strategy that first adaptively reconstructs a piecewise constant approximation of the source term, based on sequential clustering via discrepancy estimation, and then samples deviational particles directly from the resulting adaptive piecewise constant function without rejection. The mixture discrepancy, which can be easily calculated thanks to its explicit analytical expression, is employed as the measure of uniformity instead of the star discrepancy, whose calculation is NP-hard. The resulting method, dubbed HDP-AS, runs approximately ten times faster than the HDP method while maintaining the same accuracy on the Landau damping, two-stream instability, bump-on-tail, and Rosenbluth test problems.
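For reference, the mixture discrepancy of a point set in $[0,1]^s$ has the closed form of Zhou, Fang and Ning (2013), which the following sketch implements (the test point sets are illustrative choices):

```python
# Closed-form mixture discrepancy for points in the unit cube [0, 1]^s.
import numpy as np

def mixture_discrepancy(x: np.ndarray) -> float:
    """x: (n, s) array of points in [0, 1]^s; returns MD(x)."""
    n, s = x.shape
    a = np.abs(x - 0.5)
    term1 = (19.0 / 12.0) ** s
    term2 = np.prod(5.0 / 3.0 - 0.25 * a - 0.25 * a**2, axis=1).sum() * 2.0 / n
    d = np.abs(x[:, None, :] - x[None, :, :])        # pairwise |x_ij - x_kj|
    k = (15.0 / 8.0 - 0.25 * a[:, None, :] - 0.25 * a[None, :, :]
         - 0.75 * d + 0.5 * d**2)
    term3 = np.prod(k, axis=2).sum() / n**2
    return np.sqrt(term1 - term2 + term3)

rng = np.random.default_rng(0)
print(mixture_discrepancy(rng.random((256, 2))))     # i.i.d. uniform points
grid = np.stack(np.meshgrid(*(np.arange(16) / 16 + 1 / 32,) * 2),
                -1).reshape(-1, 2)
print(mixture_discrepancy(grid))                     # regular grid: lower MD
```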
In causal inference on directed acyclic graphs, the orientation of edges is in general only recovered up to Markov equivalence classes. We study Markov equivalence classes of uniformly random directed acyclic graphs. Using a tower decomposition, we show that the ratio between the number of Markov equivalence classes and directed acyclic graphs approaches a positive constant when the number of vertices goes to infinity. For a typical directed acyclic graph, the expected number of elements in its Markov equivalence class remains bounded. More precisely, we prove that for a uniformly chosen directed acyclic graph, the size of its Markov equivalence class has super-polynomial tails.
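For very small graphs, the objects in question can be enumerated outright: by the Verma-Pearl characterization, two DAGs are Markov equivalent iff they share the same skeleton and the same v-structures, so grouping all DAGs by that pair counts the Markov equivalence classes. A brute-force sketch (tiny n only; all helper names are illustrative):

```python
# Count DAGs and Markov equivalence classes on n = 3 labeled vertices.
from itertools import product, combinations

def is_acyclic(g, n):
    seen, stack = set(), set()
    def visit(u):
        if u in stack: return False          # back edge: cycle
        if u in seen: return True
        seen.add(u); stack.add(u)
        ok = all(visit(w) for (v, w) in g if v == u)
        stack.discard(u)
        return ok
    return all(visit(u) for u in range(n))

def dags(n):
    pairs = list(combinations(range(n), 2))
    for states in product((0, 1, 2), repeat=len(pairs)):  # absent, u->v, v->u
        g = {(u, v) for (u, v), s in zip(pairs, states) if s == 1} \
          | {(v, u) for (u, v), s in zip(pairs, states) if s == 2}
        if is_acyclic(g, n):
            yield frozenset(g)

def signature(g):
    """(skeleton, v-structures): equal signatures <=> Markov equivalent."""
    skel = frozenset(frozenset(e) for e in g)
    vstruct = frozenset((a, b, c) for (a, c) in g for (b, c2) in g
                        if c2 == c and a < b and frozenset((a, b)) not in skel)
    return (skel, vstruct)

all_dags = list(dags(3))
mecs = {signature(g) for g in all_dags}
print(len(all_dags), len(mecs))   # 25 DAGs on 3 vertices, 11 equivalence classes
```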
The consensus problem in distributed computing involves a network of agents aiming to compute the average of their initial vectors through local communication, represented by an undirected graph. This paper studies this problem using an average-case analysis approach, particularly over regular graphs. Traditional algorithms for solving the consensus problem are often evaluated by their worst-case performance, which may not reflect typical behavior in real-world applications. Instead, we apply average-case analysis, focusing on the expected spectral distribution of the eigenvalues, to obtain a more realistic view of performance. Key contributions include deriving the optimal method for consensus on regular graphs, showing its relation to the Heavy Ball method, analyzing its asymptotic convergence rate, and comparing it to various first-order methods through numerical experiments.
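A hedged sketch of the flavor of the result (not the paper's derivation): running a Heavy Ball / Polyak-momentum iteration on the consensus objective $\tfrac{1}{2}x^\top L x$ over a regular graph drives all agents to the average; the step sizes below are the classical worst-case tuning from the extreme Laplacian eigenvalues, whereas the paper tunes to the expected spectral distribution.

```python
# Heavy Ball consensus on a random 4-regular graph.
import numpy as np
import networkx as nx

G = nx.random_regular_graph(d=4, n=200, seed=1)
L = nx.laplacian_matrix(G).toarray().astype(float)
lam = np.sort(np.linalg.eigvalsh(L))
l2, ln = lam[1], lam[-1]                      # smallest nonzero / largest eigenvalue

alpha = 4.0 / (np.sqrt(l2) + np.sqrt(ln))**2  # classical Heavy Ball tuning
beta = ((np.sqrt(ln) - np.sqrt(l2)) / (np.sqrt(ln) + np.sqrt(l2)))**2

rng = np.random.default_rng(0)
x0 = rng.normal(size=len(G))
avg = x0.mean()                               # target: the true average

x_prev, x = x0.copy(), x0 - alpha * (L @ x0)  # first step: plain gradient step
for _ in range(100):                          # momentum iterations
    x, x_prev = x - alpha * (L @ x) + beta * (x - x_prev), x
print(np.abs(x - avg).max())                  # all agents near the average
```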