Querying cohesive subgraphs on temporal graphs (e.g., social networks, financial networks) under various conditions has recently attracted intensive research interest. In this paper, we study a novel Temporal $(k,\mathcal{X})$-Core Query (TXCQ) that extends the fundamental Temporal $k$-Core Query (TCQ) proposed in our conference paper by optimizing or constraining an arbitrary metric $\mathcal{X}$ of the $k$-core, such as size, engagement, interaction frequency, time span, burstiness, and periodicity. Our objective is to address TXCQ instances with conditions on different metrics $\mathcal{X}$ within a unified algorithmic framework that guarantees scalability. To that end, this journal paper proposes a taxonomy of measurements $\mathcal{X}(\cdot)$ and achieves our objective with a two-phase framework whenever $\mathcal{X}(\cdot)$ is time-insensitive or time-monotonic. Specifically, Phase 1 leverages the query processing algorithm of TCQ to induce all distinct $k$-cores within a given time range, and meanwhile locates the ``time zones'' in which those cores emerge. Phase 2 then conducts fast local search and $\mathcal{X}$ evaluation in each time zone, exploiting the time insensitivity or monotonicity of $\mathcal{X}(\cdot)$. By introducing two insightful concepts, the tightest time interval and the loosest time interval, which bound each time zone, redundant core induction and unnecessary $\mathcal{X}$ evaluation within a zone can be reduced dramatically. Our experimental results demonstrate that TXCQ can be addressed as efficiently as TCQ, which itself achieves state-of-the-art performance, by a general algorithmic framework that leaves $\mathcal{X}(\cdot)$ as a user-defined function.
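As background for the core structure being queried, here is a minimal Python sketch of standard $k$-core computation by iterative peeling; this is only the classical subroutine, not the paper's TCQ/TXCQ induction or time-zone machinery, and the function name is illustrative.

```python
from collections import deque

def k_core(adj, k):
    """Compute the k-core of an undirected graph given as {node: set(neighbors)}
    by repeatedly peeling vertices whose remaining degree is below k."""
    deg = {v: len(ns) for v, ns in adj.items()}
    queue = deque(v for v, d in deg.items() if d < k)
    removed = set(queue)
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                if deg[u] < k:
                    removed.add(u)
                    queue.append(u)
    return {v for v in adj if v not in removed}
```

In a temporal setting, the $k$-core is induced on the subgraph of edges whose timestamps fall in the queried interval; TXCQ additionally evaluates $\mathcal{X}(\cdot)$ on each induced core.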
This paper develops a general asymptotic theory of series ridge estimators for spatial data collected at irregularly spaced locations within a sampling region $R_n \subset \mathbb{R}^d$. We employ a stochastic sampling design capable of flexibly generating irregularly spaced sampling sites, encompassing both the pure increasing domain and the mixed increasing domain frameworks. Specifically, we focus on a spatial trend regression model and a nonparametric regression model with spatially dependent covariates. For these models, we investigate $L^2$-penalized series estimation of the trend and regression functions. As our main results, we establish uniform and $L^2$ convergence rates and multivariate central limit theorems for general series estimators. Additionally, we show that spline and wavelet series estimators achieve optimal uniform and $L^2$ convergence rates, and propose methods for constructing confidence intervals for spline and wavelet estimators. Finally, we demonstrate that our dependence structure conditions on the underlying spatial processes cover a broad class of random fields, including L\'evy-driven continuous autoregressive and moving average random fields.
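As a concrete illustration of $L^2$-penalized series estimation, a minimal one-dimensional sketch follows; the names `series_ridge`, `basis`, and `lam` are illustrative, and spline/wavelet bases and the spatial sampling design are abstracted away.

```python
import numpy as np

def series_ridge(x, y, basis, lam):
    """L2-penalized (ridge) series estimator: regress y on basis functions
    evaluated at the sampling sites x, and return the fitted function.
    beta solves (B'B + lam*I) beta = B'y for the n x K design matrix B."""
    B = np.column_stack([b(x) for b in basis])
    K = B.shape[1]
    beta = np.linalg.solve(B.T @ B + lam * np.eye(K), B.T @ y)
    return lambda t: np.column_stack([b(t) for b in basis]) @ beta
```

For example, with the monomial basis $\{1, x\}$ and a small penalty, the estimator recovers a linear trend almost exactly; in the paper the basis is a spline or wavelet system and the sites are drawn from the stochastic sampling design.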
Algorithms for causal discovery have recently undergone rapid advances and increasingly draw on flexible nonparametric methods to process complex data. With these advances comes a need for adequate empirical validation of the causal relationships learned by different algorithms. However, for most real data sources, true causal relations remain unknown. This issue is further compounded by privacy concerns surrounding the release of suitable high-quality data. To help address these challenges, we gather a complex dataset comprising measurements from an assembly line in a manufacturing context. This line consists of numerous physical processes for which we are able to provide ground truth causal relationships on the basis of a detailed study of the underlying physics. We use the assembly line data and the associated ground truth information to build a system for generating semisynthetic manufacturing data that supports benchmarking of causal discovery methods. To accomplish this, we employ distributional random forests to flexibly estimate and represent conditional distributions, which may be combined into joint distributions that strictly adhere to a causal model over the observed variables. The estimated conditionals and tools for data generation are made available in our Python library $\texttt{causalAssembly}$. Using the library, we showcase how to benchmark several well-known causal discovery algorithms.
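The joint-distribution construction can be illustrated with a generic ancestral-sampling sketch. All names below are hypothetical and this is not the $\texttt{causalAssembly}$ API: the library fits the conditionals with distributional random forests, whereas this sketch accepts arbitrary user-supplied sampler callables.

```python
import numpy as np

def sample_from_dag(order, parents, conditionals, n, seed=0):
    """Generate semisynthetic data by sampling variables in topological
    order, each from an (estimated) conditional given its parents'
    already-sampled values, so that the resulting joint distribution
    factorizes exactly according to the causal DAG.
    conditionals[v] is any callable (rng, parent_values) -> n samples."""
    rng = np.random.default_rng(seed)
    data = {}
    for v in order:
        pa = (np.column_stack([data[p] for p in parents[v]])
              if parents[v] else np.empty((n, 0)))
        data[v] = conditionals[v](rng, pa)
    return data
```

Because every variable is drawn only from its parents, any causal discovery method run on the generated sample can be scored against the known DAG.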
For the numerical solution of Dirichlet-type boundary value problems associated with nonlinear fractional differential equations of order $\alpha \in (1,2)$ involving Caputo derivatives, we propose employing shooting methods. In particular, we demonstrate that the so-called proportional secting technique for selecting the required initial values leads to numerical schemes that converge to high accuracy within a very small number of shooting iterations, and we explain the analytical background of this favourable numerical behaviour.
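To illustrate the general shooting idea, here is a minimal sketch for a classical second-order BVP, not the fractional Caputo setting, and using plain secant updates rather than the paper's proportional secting rule; all names are illustrative.

```python
def shoot(f, a, b, ya, yb, s0, s1, n=1000, tol=1e-10, max_iter=50):
    """Secant-based shooting for y'' = f(t, y, y'), y(a) = ya, y(b) = yb.
    The BVP is reduced to an IVP with unknown initial slope s = y'(a);
    each trial IVP is integrated with classical RK4, and s is adjusted
    by secant iteration until the terminal value matches yb."""
    def endpoint(s):
        h = (b - a) / n
        t, y, v = a, ya, s
        for _ in range(n):
            k1y, k1v = v, f(t, y, v)
            k2y, k2v = v + h/2*k1v, f(t + h/2, y + h/2*k1y, v + h/2*k1v)
            k3y, k3v = v + h/2*k2v, f(t + h/2, y + h/2*k2y, v + h/2*k2v)
            k4y, k4v = v + h*k3v, f(t + h, y + h*k3y, v + h*k3v)
            y += h/6*(k1y + 2*k2y + 2*k3y + k4y)
            v += h/6*(k1v + 2*k2v + 2*k3v + k4v)
            t += h
        return y - yb  # mismatch at the right boundary
    F0, F1 = endpoint(s0), endpoint(s1)
    for _ in range(max_iter):
        s0, s1, F0 = s1, s1 - F1 * (s1 - s0) / (F1 - F0), F1
        F1 = endpoint(s1)
        if abs(F1) < tol:
            break
    return s1
```

In the fractional setting, the RK4 integrator would be replaced by a solver for Caputo-type initial value problems, and the slope-selection rule is precisely where proportional secting differs from the plain secant update shown here.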
The classic string indexing problem is to preprocess a string $S$ into a compact data structure that supports efficient subsequent pattern matching queries, that is, given a pattern string $P$, report all occurrences of $P$ within $S$. In this paper, we study a basic and natural extension of string indexing called the string indexing for top-$k$ close consecutive occurrences problem (SITCCO). Here, a consecutive occurrence is a pair $(i,j)$, $i < j$, such that $P$ occurs at positions $i$ and $j$ in $S$ and there is no occurrence of $P$ between $i$ and $j$, and their distance is defined as $j-i$. Given a pattern $P$ and a parameter $k$, the goal is to report the top-$k$ consecutive occurrences of $P$ in $S$ of minimal distance. The challenge is to compactly represent $S$ while supporting queries in time close to the length of $P$ and $k$. We give three time-space trade-offs for the problem. Let $n$ be the length of $S$, $m$ the length of $P$, and $\epsilon\in(0,1]$. Our first result achieves $O(n\log n)$ space and optimal query time of $O(m+k)$. Our second and third results achieve linear space and query times either $O(m+k^{1+\epsilon})$ or $O(m + k \log^{1+\epsilon} n)$. Along the way, we develop several techniques of independent interest, including a new translation of the problem into a line segment intersection problem and a new recursive clustering technique for trees.
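For intuition, the query itself can be sketched with a naive, non-indexed baseline; the paper's contribution is answering such queries from a compact data structure in time close to $O(m + k)$, without scanning $S$ at query time as this sketch does.

```python
def topk_close_consecutive(S, P, k):
    """Naive baseline for SITCCO: find all occurrences of P in S, form the
    consecutive occurrence pairs (i, j) (adjacent occurrences, so no
    occurrence of P lies between them), and return the k pairs with the
    smallest distance j - i. Runs in O(n*m) time plus sorting."""
    occ = [i for i in range(len(S) - len(P) + 1) if S[i:i + len(P)] == P]
    pairs = list(zip(occ, occ[1:]))  # adjacent occurrences are consecutive
    return sorted(pairs, key=lambda p: p[1] - p[0])[:k]
```

For example, in $S=\texttt{abababa}$ the pattern $\texttt{aba}$ occurs at positions $0, 2, 4$, giving consecutive occurrences $(0,2)$ and $(2,4)$, both of distance $2$.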
Two Latin squares of order $n$ are $r$-orthogonal if, when superimposed, there are exactly $r$ distinct ordered pairs of symbols. The spectrum of all values of $r$ for Latin squares of order $n$ is known. A Latin square $A$ of order $n$ is $r$-self-orthogonal if $A$ and its transpose are $r$-orthogonal. The spectrum of all values of $r$ is known for all orders $n\ne 14$. We develop randomized algorithms for computing pairs of $r$-orthogonal Latin squares of order $n$ and for computing $r$-self-orthogonal Latin squares of order $n$.
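The defining quantity is straightforward to compute for given squares; a minimal sketch (illustrative name) follows.

```python
def r_value(A, B):
    """Number of distinct ordered pairs of symbols obtained by superimposing
    Latin squares A and B of the same order (given as lists of rows);
    A and B are r-orthogonal iff this equals r."""
    n = len(A)
    return len({(A[i][j], B[i][j]) for i in range(n) for j in range(n)})
```

The extremes are $r = n$ (when $B$ is obtained from $A$ by renaming symbols) and $r = n^2$ (fully orthogonal); the $r$-self-orthogonality value of $A$ is obtained by passing the transpose of $A$ as $B$.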
The $\lambda$$\Pi$-calculus modulo theory is an extension of simply typed $\lambda$-calculus with dependent types and user-defined rewrite rules. We show that it is possible to replace the rewrite rules of a theory of the $\lambda$$\Pi$-calculus modulo theory by equational axioms, when this theory features the notions of proposition and proof, while maintaining the same expressiveness. To do so, we introduce in the target theory a heterogeneous equality, and we build a translation that replaces each use of the conversion rule by the insertion of a transport. In the end, the theory with rewrite rules is a conservative extension of the theory with axioms.
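The translation mechanism can be sketched with standard typing rules; the notation below is generic and not necessarily the paper's.

```latex
% Conversion rule (with rewrite rules): types equal modulo the rewrite
% system R are interchangeable silently.
\frac{\Gamma \vdash t : A \qquad A \equiv_{\mathcal{R}} B}
     {\Gamma \vdash t : B}
\qquad\leadsto\qquad
% After translation: each use of conversion becomes an explicit transport
% along a proof e of a (heterogeneous) equality between A and B.
\frac{\Gamma \vdash t : A \qquad \Gamma \vdash e : A \simeq B}
     {\Gamma \vdash \mathsf{transp}(e, t) : B}
```

The left rule is implicit and proof-irrelevant; the right rule materializes the same step as a term, which is why an equational axiomatization suffices in the target theory.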
We consider the paradigm of unsupervised anomaly detection, which involves identifying anomalies within a dataset in the absence of labeled examples. Although distance-based methods are among the top performers for unsupervised anomaly detection, they suffer heavily from sensitivity to the choice of the number of nearest neighbors. In this paper, we propose a new distance-based algorithm called bagged regularized $k$-distances for anomaly detection (BRDAD), which converts the unsupervised anomaly detection problem into a convex optimization problem. Our BRDAD algorithm selects the weights by minimizing a surrogate risk, i.e., the finite-sample bound on the empirical risk of the bagged weighted $k$-distances for density estimation (BWDDE). This approach enables us to successfully address the sensitivity of distance-based algorithms to hyperparameter choice. Moreover, when dealing with large-scale datasets, efficiency issues are addressed by the bagging technique incorporated in our BRDAD algorithm. On the theoretical side, we establish fast convergence rates for the AUC regret of our algorithm and demonstrate that the bagging technique significantly reduces the computational complexity. On the practical side, we conduct numerical experiments on anomaly detection benchmarks to illustrate the insensitivity of our algorithm to parameter selection compared with other state-of-the-art distance-based methods. Moreover, applying the bagging technique in our algorithm brings promising improvements on real-world datasets.
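The bagged weighted $k$-distances idea can be sketched as follows; this illustration uses uniform weights over the $k$ nearest neighbors, whereas BRDAD's key step is selecting those weights by minimizing a surrogate risk bound, which is not reproduced here (all names are illustrative).

```python
import numpy as np

def bagged_kdist_scores(X, k=5, n_bags=10, subsample=0.5, seed=0):
    """Anomaly scores via bagged weighted k-distances: each bag draws a
    random subsample, and a point's score averages its distances to the
    k nearest subsampled points across bags. Larger scores indicate
    points lying in low-density regions, i.e., likely anomalies."""
    rng = np.random.default_rng(seed)
    n = len(X)
    scores = np.zeros(n)
    for _ in range(n_bags):
        idx = rng.choice(n, size=max(k, int(subsample * n)), replace=False)
        # pairwise distances from every point to the subsampled points
        D = np.linalg.norm(X[:, None, :] - X[idx][None, :, :], axis=2)
        D.sort(axis=1)
        scores += D[:, :k].mean(axis=1)  # uniform weights over k-distances
    return scores / n_bags
```

Bagging over subsamples is also what keeps each bag's nearest-neighbor computation small, which is the source of the computational savings the abstract refers to.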
When estimating the parameters of functional ARMA, GARCH, and invertible linear processes, covariance and lagged cross-covariance operators of processes in Cartesian product spaces arise. Such operators have been consistently estimated in recent years, either in less generality or under a strong condition. This article extends the existing literature by deriving explicit upper bounds on the estimation errors for lagged covariance and lagged cross-covariance operators of processes in general Cartesian product Hilbert spaces, based on the mild weak dependence condition of $L^p$-$m$-approximability. The upper bounds are stated for each lag, Cartesian power(s), and sample size, where the two processes in the context of lagged cross-covariance operators may take values in different spaces. General consequences of our results are also discussed.
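For reference, the population and empirical lag-$h$ cross-covariance operators of centered, jointly observed processes $(X_t)$ and $(Y_t)$ take the following standard form; the notation here is generic and may differ from the paper's.

```latex
\mathcal{C}^{XY}_h \;=\; \mathbb{E}\big[\, X_0 \otimes Y_h \,\big],
\qquad
\widehat{\mathcal{C}}^{XY}_h \;=\; \frac{1}{n-h}\sum_{t=1}^{n-h} X_t \otimes Y_{t+h},
```

where $\otimes$ denotes the tensor product on the respective Hilbert spaces; the estimation error is then measured by an operator norm of $\widehat{\mathcal{C}}^{XY}_h - \mathcal{C}^{XY}_h$, for which the article derives explicit upper bounds under $L^p$-$m$-approximability.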
Click-through rate (CTR) prediction plays a critical role in recommender systems and online advertising. The data used in these applications are multi-field categorical data, where each feature belongs to one field. Field information has proved to be important, and several works incorporate fields into their models. In this paper, we propose a novel approach to model field information effectively and efficiently. The proposed approach is a direct improvement over FwFM and is named Field-matrixed Factorization Machines (FmFM, or $FM^2$). We also propose a new interpretation of FM and FwFM within the FmFM framework and compare it with FFM. Besides pruning cross terms, our model supports field-specific variable dimensions of embedding vectors, which acts as soft pruning. We also propose an efficient way to minimize these dimensions while preserving model performance. The FmFM model can be optimized further by caching intermediate vectors, after which it takes only thousands of floating-point operations (FLOPs) to make a prediction. Our experimental results show that it can outperform FFM, which is more complex. The FmFM model's performance is also comparable to that of DNN models, which require many more FLOPs at runtime.
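The FmFM interaction assigns each ordered field pair a learned matrix through which the two feature embeddings interact; a minimal scoring sketch follows (illustrative names, training and the caching optimization omitted).

```python
import numpy as np

def fmfm_score(embeds, fields, field_mats, w0=0.0, w=None):
    """Field-matrixed FM score for one sample with active features given by
    their embeddings and fields: each pair (i, j) contributes
    (v_i @ M[f_i, f_j]) @ v_j, where M[f_i, f_j] is the learned matrix
    for that ordered field pair (generalizing FwFM's scalar field weight)."""
    s = w0 + (0.0 if w is None else float(np.sum(w)))  # bias + linear terms
    feats = list(zip(embeds, fields))
    for a in range(len(feats)):
        for b in range(a + 1, len(feats)):
            (vi, fi), (vj, fj) = feats[a], feats[b]
            s += float(vi @ field_mats[(fi, fj)] @ vj)
    return s
```

With all field matrices fixed to the identity this reduces to FM's plain dot product, and with scaled identities it recovers FwFM; non-square matrices are what allow the field-specific embedding dimensions the abstract describes.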
Graph convolutional networks (GCNs) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets, since they must compute node representations recursively from their neighbors. Current GCN training algorithms suffer either from high computational costs that grow exponentially with the number of layers, or from high memory usage for loading the entire graph and the node embeddings. In this paper, we propose a novel, efficient layer-wise training framework for GCNs (L-GCN) that disentangles feature aggregation and feature transformation during training, thereby greatly reducing time and memory complexity. We present a theoretical analysis of L-GCN under the graph isomorphism framework, showing that under mild conditions L-GCN yields GCNs as powerful as those produced by the more costly conventional training algorithm. We further propose L^2-GCN, which learns a controller for each layer that can automatically adjust the number of training epochs per layer in L-GCN. Experiments show that L-GCN is at least an order of magnitude faster than the state of the art, with consistent memory usage independent of dataset size, while maintaining comparable prediction performance. With the learned controller, L^2-GCN can further cut the training time in half. Our code is available at https://github.com/Shen-Lab/L2-GCN.
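The aggregation/transformation disentanglement can be illustrated with a minimal forward-pass sketch; this is illustrative only, and L-GCN's greedy per-layer training loop and L^2-GCN's learned controller are omitted.

```python
import numpy as np

def layerwise_gcn_features(A, X, weights):
    """Layer-wise GCN propagation reflecting L-GCN's decoupling: the
    aggregation step (A_hat @ H) needs no trainable parameters and is
    computed once per layer, after which the transformation (H @ W with
    ReLU) is applied; in L-GCN each W is trained to convergence before
    moving on to the next layer, so no recursion across layers is needed."""
    # symmetric normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}
    A = A + np.eye(len(A))
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))
    H = X
    for W in weights:
        H = np.maximum(A_hat @ H @ W, 0.0)  # aggregate, transform, ReLU
    return H
```

Because the aggregated features for layer $\ell$ can be precomputed and cached, training layer $\ell$'s weights touches neither the graph nor the other layers, which is the source of the constant memory footprint.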