Observations in various applications are frequently represented as a time series of multidimensional arrays, called tensor time series, preserving the inherent multidimensional structure. In this paper, we present a factor model approach, in a form similar to tensor CP decomposition, to the analysis of high-dimensional dynamic tensor time series. As the loading vectors are uniquely defined but not necessarily orthogonal, the model is significantly different from existing tensor factor models based on Tucker-type tensor decompositions. The model structure allows for a set of uncorrelated one-dimensional latent dynamic factor processes, making it much more convenient to study the underlying dynamics of the time series. A new high-order projection estimator is proposed for such a factor model, utilizing the special structure and the idea of the higher-order orthogonal iteration procedures commonly used in Tucker-type tensor factor models and general tensor CP decomposition procedures. Theoretical investigation provides statistical error bounds for the proposed methods, which show the significant advantage of utilizing the special model structure. A simulation study further demonstrates the finite-sample properties of the estimators, and a real data application illustrates the model and its interpretations.
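In schematic form (generic notation, not necessarily the paper's), such a CP-type factor model for an order-3 tensor time series reads:

```latex
\mathcal{X}_t \;=\; \sum_{r=1}^{R} f_{rt}\; a_r \circ b_r \circ c_r \;+\; \mathcal{E}_t ,
```

where $\mathcal{X}_t$ is the observed tensor at time $t$, the $f_{rt}$ are the uncorrelated one-dimensional latent factor processes, $a_r \circ b_r \circ c_r$ are rank-one loading components ($\circ$ denoting the outer product), and $\mathcal{E}_t$ is noise. The loading vectors $a_r, b_r, c_r$ are identified up to scale but, unlike in Tucker-type models, are not required to be orthogonal.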
We propose to model matrix time series based on the tensor CP decomposition. Instead of using an iterative algorithm, which is the standard practice for estimating CP decompositions, we propose a new one-pass estimation procedure based on a generalized eigenanalysis constructed from the serial dependence structure of the underlying process. A key idea of the new procedure is to project a generalized eigenequation defined in terms of rank-reduced matrices onto a lower-dimensional one with full-rank matrices, to avoid the intricacy of the former, whose number of eigenvalues can be zero, finite, or infinite. The asymptotic theory is established under a general setting without stationarity. It shows, for example, that all the component coefficient vectors in the CP decomposition are estimated consistently, with error rates that depend on the relative sizes of the dimensions of the time series and the sample size. The proposed model and estimation method are further illustrated with both simulated and real data, showing effective dimension reduction in modelling and forecasting matrix time series.
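Schematically (again in generic notation rather than the authors'), the matrix version of the model and the eigenequation at the heart of the one-pass procedure are:

```latex
X_t \;=\; \sum_{r=1}^{d} f_{t,r}\, a_r b_r^{\top} \;+\; \varepsilon_t ,
\qquad
M_1 \gamma \;=\; \lambda\, M_2 \gamma ,
```

where $X_t$ is the observed matrix at time $t$, the $f_{t,r}$ are latent scalar processes, and $M_1, M_2$ are matrices assembled from lagged auto-cross-covariances of the observed process. Because $M_1$ and $M_2$ are rank-reduced, the eigenequation is first projected onto a lower-dimensional space in which the projected matrices have full rank, which is the device the abstract refers to.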
Session-based recommendation (SBR) is a challenging task that aims at recommending the next item based on anonymous interaction sequences. Despite the superior performance of existing methods for SBR, there are still several limitations: (i) almost all existing works concentrate on single-interest extraction and fail to disentangle the multiple interests of a user, which easily results in suboptimal representations for SBR; (ii) previous methods also ignore multi-form temporal information, which is a significant signal for obtaining the current intention in SBR. To address these limitations, we propose a novel method, called \emph{Temporal aware Multi-Interest Graph Neural Network} (TMI-GNN), to disentangle multiple interests and yield refined intention representations through the injection of two levels of temporal information. Specifically, by appending multiple interest nodes, we construct a multi-interest graph for the current session, and adopt GNNs to model the item-item relation to capture adjacent item transitions, the item-interest relation to disentangle the multiple interests, and the interest-item relation to refine the item representation. Meanwhile, we incorporate item-level time-interval signals to guide the item information propagation, and interest-level time-distribution information to assist the scattering of interest information. Experiments on three benchmark datasets demonstrate that TMI-GNN consistently outperforms other state-of-the-art methods.
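As a rough illustration of the item-level time-interval gating (not the authors' implementation; the exponential decay form and the parameter `tau` are assumptions made here for concreteness), one round of item-item propagation on a session graph might look like:

```python
import numpy as np

def propagate(item_emb, adj, dt, tau=1.0):
    """One round of item-item message passing in which each edge (i -> j)
    is down-weighted by the time interval dt[i, j] between the two clicks.

    item_emb : (n, d) embeddings of the session's item nodes
    adj      : (n, n) binary adjacency of adjacent-item transitions
    dt       : (n, n) time intervals (e.g. seconds) on the edges
    """
    decay = np.exp(-dt / tau) * adj            # time-interval gate per edge
    row_sum = decay.sum(axis=1, keepdims=True)
    weights = np.divide(decay, row_sum,        # row-normalize, skipping
                        out=np.zeros_like(decay),
                        where=row_sum > 0)     # nodes with no out-edges
    return weights @ item_emb                  # aggregate neighbor messages
```

The interest-level pathway described in the abstract would add interest nodes to the graph and use the time-distribution signal analogously when scattering information between interest and item nodes.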
This paper offers a new approach to address model uncertainty in (potentially) divergent-dimensional single-index models (SIMs). We propose a model-averaging estimator based on cross-validation, which allows the dimension of covariates and the number of candidate models to increase with the sample size. We show that when all candidate models are misspecified, our model-averaging estimator is asymptotically optimal in the sense that its squared loss is asymptotically identical to that of the infeasible best possible averaging estimator. In a different situation where correct models are available in the model set, the proposed weighting scheme asymptotically assigns all weight to the correct models. We also extend our method to average regularized estimators and propose pre-screening methods to deal with cases with high-dimensional covariates. We illustrate the merits of our method via simulations and two empirical applications.
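The weighting step of cross-validation-based model averaging can be sketched generically: stack the out-of-sample predictions of the M candidate models and minimize the CV criterion over the weight simplex. The snippet below is a minimal sketch of this generic criterion, not the paper's SIM-specific estimator:

```python
import numpy as np
from scipy.optimize import minimize

def cv_averaging_weights(cv_preds, y):
    """Model-averaging weights on the simplex minimizing the CV criterion
    ||y - cv_preds @ w||^2.

    cv_preds : (n, M) cross-validated predictions of the M candidate models
    y        : (n,)   observed responses
    """
    M = cv_preds.shape[1]
    obj = lambda w: np.sum((y - cv_preds @ w) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)  # weights sum to 1
    bounds = [(0.0, 1.0)] * M                                  # nonnegative weights
    w0 = np.full(M, 1.0 / M)
    res = minimize(obj, w0, bounds=bounds, constraints=cons, method='SLSQP')
    return res.x
```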
This paper proposes a general two-directional simultaneous inference (TOSI) framework for high-dimensional models with a manifest-variable or latent-variable structure, for example, high-dimensional mean models, high-dimensional sparse regression models, and high-dimensional latent factor models. TOSI performs simultaneous inference on a set of parameters from two directions: one tests whether the parameters assumed to be zero are indeed zero, and the other tests whether there exist zeros among the parameters assumed to be nonzero. As a result, we can exactly identify whether the parameters are zeros, thereby keeping the data structure fully and parsimoniously expressed. We theoretically prove that the proposed TOSI method asymptotically controls the Type I error at the prespecified significance level and that its testing power converges to one. Simulations are conducted to examine the finite-sample performance of the proposed method, and two real datasets are analyzed. The results show that the TOSI method is more predictive and yields more interpretable estimators than existing methods.
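The two directions can be stated schematically, with $\theta$ the parameter vector, $\mathcal{S}$ the index set assumed to be zero, and $\mathcal{S}^{c}$ the set assumed to be nonzero (generic notation):

```latex
\text{Direction 1:}\quad
H_{0}^{(1)}\!:\ \theta_j = 0 \ \text{for all } j \in \mathcal{S}
\quad\text{vs.}\quad
H_{a}^{(1)}\!:\ \theta_j \neq 0 \ \text{for some } j \in \mathcal{S};
\\[4pt]
\text{Direction 2:}\quad
H_{0}^{(2)}\!:\ \theta_j \neq 0 \ \text{for all } j \in \mathcal{S}^{c}
\quad\text{vs.}\quad
H_{a}^{(2)}\!:\ \theta_j = 0 \ \text{for some } j \in \mathcal{S}^{c}.
```

Rejecting neither hypothesis certifies the assumed sparsity pattern from both sides, which is what allows the zeros to be identified exactly.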
We introduce a nonparametric graphical model for discrete node variables based on additive conditional independence. Additive conditional independence is a three-way statistical relation that shares similar properties with conditional independence by satisfying the semi-graphoid axioms. Based on this relation, we build an additive graphical model for discrete variables that does not suffer from the restrictions of a parametric model such as the Ising model. We develop an estimator of the new graphical model via penalized estimation of the discrete version of the additive precision operator and establish the consistency of the estimator in the ultrahigh-dimensional setting. Along with these methodological developments, we also exploit the properties of discrete random variables to uncover a deeper relation between additive conditional independence and conditional independence than was previously known. The new graphical model reduces to a conditional independence graphical model under certain sparsity conditions. We conduct simulation experiments and an analysis of an HIV antiretroviral therapy data set to compare the new method with existing ones.
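Schematically, and only by analogy with penalized precision-matrix estimation (the paper's additive precision operator and its loss are more delicate than this sketch suggests), the estimator has a group-penalized form:

```latex
\widehat{\mathcal{P}} \;=\; \operatorname*{arg\,min}_{\mathcal{P}} \; L_n(\mathcal{P}) \;+\; \lambda \sum_{j \neq k} \big\| \mathcal{P}_{jk} \big\| ,
```

where $L_n$ is an empirical loss for the (discrete) additive precision operator and the group penalty $\|\mathcal{P}_{jk}\|$ zeroes out entire operator blocks, so that $\widehat{\mathcal{P}}_{jk} = 0$ encodes the absence of the edge $(j,k)$ in the estimated graph.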
Low-rank approximation using time-dependent bases (TDBs) has proven effective for reduced-order modeling of stochastic partial differential equations (SPDEs). In these techniques, the random field is decomposed into a set of deterministic TDBs and time-dependent stochastic coefficients. When applied to SPDEs with non-homogeneous stochastic boundary conditions (BCs), an appropriate BC must be specified for each of the TDBs. However, determining BCs for TDBs is not trivial because: (i) the dimension of the random BCs differs from the rank of the TDB subspace; (ii) the TDBs in most formulations must preserve orthonormality or orthogonality constraints, and specifying BCs for the TDBs should not violate these constraints in the space-discretized form. In this work, we present a methodology for determining the boundary conditions of TDBs at no additional computational cost beyond that of solving the same SPDE with homogeneous BCs. Our methodology is informed by the fact that the TDB evolution equations are the optimality conditions of a variational principle. We leverage the same variational principle to derive an evolution equation for the values of the TDBs at the boundaries. The presented methodology preserves the orthonormality or orthogonality constraints of the TDBs. We present the formulation for both the dynamically bi-orthonormal (DBO) decomposition and the dynamically orthogonal (DO) decomposition. We show that the presented methodology can be applied to stochastic Dirichlet, Neumann, and Robin boundary conditions. We assess the performance of the presented method on the linear advection-diffusion equation, Burgers' equation, and a two-dimensional advection-diffusion equation with constant and temperature-dependent conduction coefficients.
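For concreteness, a DO-type representation of the random field takes the familiar form (standard notation for dynamically orthogonal decompositions; the DBO variant differs in the constraints imposed on the stochastic coefficients):

```latex
u(x, t; \omega) \;\approx\; \bar{u}(x, t) \;+\; \sum_{i=1}^{r} u_i(x, t)\, y_i(t; \omega),
\qquad
\big\langle u_i(\cdot, t),\, u_j(\cdot, t) \big\rangle = \delta_{ij},
```

where $\bar{u}$ is the mean field, the $u_i$ are the orthonormal TDB modes, and the $y_i$ are time-dependent stochastic coefficients. The difficulty the paper addresses is what boundary values the $r$ modes $u_i$ should carry when the SPDE's own BCs are random, non-homogeneous, and of dimension different from $r$.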
We propose a theoretical framework for investigating the modeling error caused by numerical integration in the learning of dynamics. Recently, learning equations of motion that describe dynamics from data using neural networks has been attracting attention. During such training, numerical integration is used to compare the data with the solution of the neural network model; however, discretization errors due to the numerical integration prevent the model from being trained correctly. In this study, we formulate the modeling error using the Dahlquist test equation, which is commonly used in the analysis of numerical methods, and apply the framework to several of the Runge--Kutta methods.
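A concrete instance of this bias, under the assumption that the data come from the exact flow of the Dahlquist equation $y' = \lambda y$ and the model is trained through one explicit Euler step: the fitted rate solves $1 + h\hat\lambda = e^{h\lambda}$, so $\hat\lambda = (e^{h\lambda} - 1)/h \neq \lambda$. The sketch below simply evaluates this discrepancy:

```python
import numpy as np

lam, h = -2.0, 0.1                       # true rate and step size
# Exact one-step map of y' = lam * y over a step h:
exact_factor = np.exp(h * lam)
# Training through explicit Euler fits y_{n+1} = (1 + h*lam_hat) * y_n,
# so the loss-minimizing rate is:
lam_hat = (exact_factor - 1.0) / h
print(f"true lambda    = {lam}")
print(f"learned lambda = {lam_hat:.6f}")   # about -1.8127: biased by discretization
print(f"modeling error = {lam_hat - lam:.6f}")
```

Higher-order Runge--Kutta methods replace $1 + h\hat\lambda$ with their stability function $R(h\hat\lambda)$, shrinking but not eliminating the gap.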
This paper focuses on the regularization of the backward time-fractional diffusion problem on an unbounded domain. This problem is well known to be ill-posed, hence the need for a regularization method in order to recover a stable approximate solution. For the problem under consideration, we present a unified framework of regularization which covers techniques such as Fourier regularization [19], mollification [12] and the approximate inverse [7]. We investigate a regularization technique with two major advantages: the simplicity of computing the regularized solution, and the avoidance of truncating high-frequency components (so as to avoid undesirable oscillations in the resulting approximate solution). Under classical Sobolev smoothness conditions, we derive order-optimal error estimates between the approximate solution and the exact solution in the case where both the data and the model are only approximately known. In addition, an order-optimal a posteriori parameter choice rule based on the Morozov principle is given. Finally, via numerical experiments in two-dimensional space, we illustrate the efficiency of our regularization approach and numerically confirm the theoretical convergence rates established in the paper.
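The mechanism is easy to display in schematic notation: for the time-fractional diffusion equation, the Fourier transform of the data at terminal time $T$ is damped by a Mittag-Leffler factor, and backward reconstruction divides by it, which amplifies high frequencies; a regularizing filter $\varphi_{\epsilon}$ tames that division:

```latex
\widehat{u}(\xi, T) \;=\; E_{\alpha}\!\big(-|\xi|^{2} T^{\alpha}\big)\, \widehat{u}(\xi, 0),
\qquad
\widehat{u}_{\epsilon}(\xi, 0) \;=\; \frac{\varphi_{\epsilon}(\xi)}{E_{\alpha}\!\big(-|\xi|^{2} T^{\alpha}\big)}\, \widehat{u}^{\delta}(\xi, T),
```

where $E_{\alpha}$ is the Mittag-Leffler function and $\widehat{u}^{\delta}(\cdot, T)$ is the noisy data. A sharp cut-off choice of $\varphi_{\epsilon}$ corresponds to Fourier truncation, while smooth kernels give mollification-type methods, which is consistent with the unified framework the abstract describes.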
Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
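The order-4 canonical decomposition underlying the scoring can be sketched as follows (a real-valued schematic only; the actual ComplEx extension uses complex embeddings and takes the real part of the product):

```python
import numpy as np

def cp4_score(U, V, W, X, s, r, o, t):
    """Order-4 CP score of a (subject, relation, object, timestamp) quadruple:
    a sum over rank-one components.

    U, V, W, X : factor matrices for subjects, relations, objects, and
                 timestamps, each with one row per index and `rank` columns.
    """
    return np.sum(U[s] * V[r] * W[o] * X[t])
```

Higher scores indicate more plausible quadruples, so a query such as (US, has president, ?, 2012) is answered by ranking all candidate objects by this score.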
Robust estimation is much more challenging in high dimensions than it is in one dimension: Most techniques either lead to intractable optimization problems or estimators that can tolerate only a tiny fraction of errors. Recent work in theoretical computer science has shown that, in appropriate distributional models, it is possible to robustly estimate the mean and covariance with polynomial time algorithms that can tolerate a constant fraction of corruptions, independent of the dimension. However, the sample and time complexity of these algorithms is prohibitively large for high-dimensional applications. In this work, we address both of these issues by establishing sample complexity bounds that are optimal, up to logarithmic factors, as well as giving various refinements that allow the algorithms to tolerate a much larger fraction of corruptions. Finally, we show on both synthetic and real data that our algorithms have state-of-the-art performance and suddenly make high-dimensional robust estimation a realistic possibility.
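A heavily simplified numpy sketch of the filtering idea that such algorithms build on is given below (the actual algorithms use carefully calibrated thresholds and tail bounds; the fixed threshold and quantile rule here are purely illustrative):

```python
import numpy as np

def filtered_mean(points, eps, threshold=1.5, max_iter=50):
    """Simplified 'filter' for robust mean estimation: repeatedly remove the
    points deviating most along the top eigenvector of the empirical
    covariance until its spectral norm looks clean.

    points : (n, d) sample, an eps-fraction of which may be corrupted
    threshold : spectral-norm bound; 1.5 is a placeholder suited to data
                whose true covariance is close to the identity
    """
    pts = points.copy()
    for _ in range(max_iter):
        mu = pts.mean(axis=0)
        cov = np.cov(pts, rowvar=False)
        vals, vecs = np.linalg.eigh(cov)
        if vals[-1] <= threshold:            # covariance already well-behaved
            break
        v = vecs[:, -1]                      # direction of largest variance
        scores = ((pts - mu) @ v) ** 2       # corruptions stick out along v
        keep = scores <= np.quantile(scores, 1.0 - 2 * eps)
        pts = pts[keep]                      # drop the most extreme tail
    return pts.mean(axis=0)
```

The key property, established in the line of work the abstract refers to, is that such spectral filtering tolerates a constant fraction of corruptions with error independent of the dimension.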