In this paper, we propose a computationally efficient and theoretically justified method, the likelihood ratio scan method (LRSM), for estimating multiple change-points in a piecewise stationary generalized conditional integer-valued autoregressive process. LRSM with the usual window parameter $h$ is better suited to long time series with few, evenly spaced change-points, whereas LRSM with the multiple window parameter $h_{mix}$ performs well in short time series with many, densely spaced change-points. LRSM can be performed efficiently, with computational complexity of order $O(n(\log n)^3)$. Moreover, two bootstrap procedures, namely parametric and block bootstrap, are developed for constructing confidence intervals (CIs) for each of the change-points. Simulation experiments and real data analysis show that the LRSM and bootstrap procedures perform excellently and are consistent with the theoretical analysis.
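As a rough illustration of the scan idea, and not the authors' full LRSM procedure, the following Python sketch computes a likelihood-ratio scan statistic over a symmetric window of width $h$ for a count series; the i.i.d. Poisson likelihood within each window is an illustrative simplification of the piecewise model.

```python
import numpy as np

def poisson_loglik(x):
    """Maximized log-likelihood of i.i.d. Poisson data (MLE: lambda = mean)."""
    lam = max(x.mean(), 1e-12)
    return np.sum(x * np.log(lam) - lam)  # constant log(x!) terms omitted

def lr_scan(x, h):
    """Likelihood-ratio scan statistic at each admissible time point.

    Large values of scan[t] suggest a change-point near t."""
    n = len(x)
    scan = np.full(n, np.nan)
    for t in range(h, n - h):
        left, right = x[t - h:t], x[t:t + h]
        both = x[t - h:t + h]
        scan[t] = 2 * (poisson_loglik(left) + poisson_loglik(right)
                       - poisson_loglik(both))
    return scan

# Toy example: a mean shift at t = 120.
rng = np.random.default_rng(0)
x = np.concatenate([rng.poisson(2.0, 120), rng.poisson(5.0, 80)])
scan = lr_scan(x, h=30)
print("estimated change-point:", np.nanargmax(scan))
```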
In this paper, to address optimization problems on a compact matrix manifold, we introduce a novel algorithmic framework called the Transformed Gradient Projection (TGP) algorithm, built on the projection onto this compact matrix manifold. Compared with existing algorithms, the key innovation of our approach lies in the use of a new class of search directions together with various stepsizes, including the Armijo, nonmonotone Armijo, and fixed stepsizes, to guide the selection of the next iterate. Our framework offers flexibility by encompassing the classical gradient projection algorithms as special cases and by intersecting the class of retraction-based line-search algorithms. Notably, when we focus on the Stiefel or Grassmann manifold, many existing algorithms in the literature can be seen as specific instances of our proposed framework, which also induces several new special cases. We then conduct a thorough exploration of the convergence properties of these algorithms for the various search directions and stepsizes. To achieve this, we extensively analyze the geometric properties of the projection onto compact matrix manifolds, allowing us to extend classical inequalities related to retractions from the literature. Building upon these insights, we establish the weak convergence, convergence rate, and global convergence of the TGP algorithms under the three distinct stepsizes. When the compact matrix manifold is the Stiefel or Grassmann manifold, our convergence results either encompass or surpass those found in the literature. Finally, through a series of numerical experiments, we observe that the TGP algorithms, owing to their increased flexibility in choosing search directions, outperform classical gradient projection and retraction-based line-search algorithms in several scenarios.
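As a minimal sketch of the classical gradient projection special case mentioned above (not the full TGP framework with transformed search directions), the following Python code performs projected gradient steps with Armijo backtracking on the Stiefel manifold; the trace objective and all parameter values are illustrative assumptions.

```python
import numpy as np

def proj_stiefel(Y):
    """Projection onto the Stiefel manifold via the polar factor of Y:
    P(Y) = U V^T for the thin SVD Y = U S V^T."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def gradient_projection(f, grad, X, alpha0=1.0, tau=0.5, sigma=1e-4,
                        maxit=500, tol=1e-8):
    """Classical gradient projection with Armijo backtracking (one of the
    special cases covered by the TGP framework, per the abstract)."""
    for _ in range(maxit):
        G = grad(X)
        alpha = alpha0
        for _ in range(50):                      # Armijo backtracking
            X_new = proj_stiefel(X - alpha * G)
            if f(X_new) <= f(X) + sigma * np.sum(G * (X_new - X)):
                break
            alpha *= tau
        if np.linalg.norm(X_new - X) < tol:
            break
        X = X_new
    return X

# Illustrative objective: minimize tr(X^T A X) over n x p orthonormal X,
# whose minimizers span the eigenspace of the p smallest eigenvalues of A.
rng = np.random.default_rng(1)
n, p = 20, 3
M = rng.standard_normal((n, n)); A = M + M.T
f = lambda X: float(np.trace(X.T @ A @ X))
grad = lambda X: 2.0 * A @ X
X = gradient_projection(f, grad, proj_stiefel(rng.standard_normal((n, p))))
print("objective:", f(X))
print("min possible:", np.sort(np.linalg.eigvalsh(A))[:p].sum())
print("feasibility:", np.linalg.norm(X.T @ X - np.eye(p)))
```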
In this paper, we investigate the problem of strong approximation of the solution of SDEs when the drift coefficient is given in integral form. Such drifts often appear when analyzing the stochastic dynamics of optimization procedures in machine learning problems. We discuss connections of the randomized Euler approximation scheme defined here with a perturbed version of the stochastic gradient descent (SGD) algorithm. We investigate its upper error bounds, in terms of the discretization parameter $n$ and the size $M$ of the random sample drawn at each step of the algorithm, in different subclasses of coefficients of the underlying SDE. Finally, we also report the results of numerical experiments performed on GPU architectures.
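A minimal sketch of a randomized Euler scheme of this flavor, under illustrative assumptions (scalar SDE, drift $a(x) = \mathbb{E}_\theta[f(x,\theta)]$ estimated at each step from $M$ fresh samples of $\theta$):

```python
import numpy as np

def randomized_euler(f, sample_theta, sigma, x0, T, n, M, rng):
    """Randomized Euler scheme for dX_t = a(X_t) dt + sigma(X_t) dW_t,
    where a(x) = E_theta[f(x, theta)] is only accessible by sampling theta.

    At each step the drift is replaced by a Monte Carlo average over M
    freshly drawn samples of theta (the perturbed-SGD connection)."""
    h = T / n
    x = x0
    for _ in range(n):
        thetas = sample_theta(M, rng)                    # M i.i.d. draws
        drift = np.mean([f(x, th) for th in thetas])     # randomized drift
        x = x + drift * h + sigma(x) * np.sqrt(h) * rng.standard_normal()
    return x

# Toy example (illustrative choices): f(x, theta) = -(x - theta) with
# theta ~ N(1, 1), so the exact drift is 1 - x (an OU process).
rng = np.random.default_rng(0)
f = lambda x, th: -(x - th)
sample_theta = lambda M, r: r.normal(1.0, 1.0, size=M)
xT = randomized_euler(f, sample_theta, lambda x: 0.3, x0=0.0,
                      T=1.0, n=100, M=10, rng=rng)
print("X_T ~", xT)
```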
Principal component analysis and factor analysis are fundamental multivariate analysis methods. In this paper, a unified framework connecting them is introduced. Under a general latent variable model, we present matrix optimization problems from the viewpoint of loss function minimization, and show that the two methods can be viewed as solutions to these optimization problems with specific loss functions. Specifically, principal component analysis can be derived from a broad class of loss functions including the $L_2$ norm, while factor analysis corresponds to a modified $L_0$ norm problem. Related problems are discussed, including algorithms, penalized maximum likelihood estimation under the latent variable model, and a principal component factor model. These results can lead to new tools for data analysis and new research topics.
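A small sketch of the $L_2$ side of this correspondence: PCA as the minimizer of the squared reconstruction loss over rank-$k$ factorizations, computed by truncated SVD (the data and rank below are illustrative):

```python
import numpy as np

# PCA minimizes the squared (L2) reconstruction loss ||X - Z W^T||_F^2 over
# rank-k factorizations; by Eckart-Young the solution is the truncated SVD.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 8))
X -= X.mean(axis=0)                       # center, as in PCA
k = 3

U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :k] * s[:k]                 # principal component scores Z
loadings = Vt[:k].T                       # loadings W (orthonormal columns)

loss = np.linalg.norm(X - scores @ loadings.T, "fro")**2
print("L2 reconstruction loss at the PCA solution:", loss)
print("equals the sum of discarded squared singular values:", np.sum(s[k:]**2))
```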
In this paper, we propose to decompose the canonical parameter of a multinomial model into a set of participant scores and a set of category scores. Both sets of scores are linearly constrained to represent external information about the participants and categories. For the estimation of the parameters of the decomposition, we derive a majorization-minimization algorithm. We place special emphasis on the case where the categories represent profiles of binary response variables. In that case, the multinomial model becomes a regression model for multiple binary response variables, and researchers might be interested in the relationship between an external variable for the participants (i.e., a predictor) and one of the binary response variables, or in the relationship between this predictor and the association among the binary response variables. We derive interpretational rules for these relationships in terms of changes in log odds or log odds ratios. Connections between our multinomial canonical decomposition and loglinear models, multinomial logistic regression, multinomial reduced rank logistic regression, and double constrained correspondence analysis are discussed. We illustrate our methodology with two empirical data sets.
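As a minimal sketch of the majorization-minimization principle in a multinomial setting (an unconstrained multinomial logit, not the authors' constrained canonical decomposition), the following Python code uses Böhning's quadratic upper bound on the Hessian to obtain a monotone update; all data and dimensions are illustrative.

```python
import numpy as np

def softmax(eta):
    e = np.exp(eta - eta.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def mm_multinomial(X, Y, iters=500):
    """Majorization-minimization for multinomial logit, eta = X @ B.

    Boehning's bound diag(p) - p p^T <= (1/2) I majorizes the negative
    log-likelihood by a quadratic with curvature (1/2) X^T X per category,
    yielding the monotone update B <- B + 2 (X^T X)^{-1} X^T (Y - P)."""
    q, K = X.shape[1], Y.shape[1]
    B = np.zeros((q, K))
    XtX_inv = np.linalg.inv(X.T @ X)
    for _ in range(iters):
        P = softmax(X @ B)
        B = B + 2.0 * XtX_inv @ X.T @ (Y - P)
    return B

# Toy data: intercept plus one predictor, K = 3 response categories.
rng = np.random.default_rng(0)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
B_true = np.array([[0.0, 0.5, -0.5], [0.0, 1.0, -1.0]])
P_true = softmax(X @ B_true)
Y = np.array([rng.multinomial(1, p / p.sum()) for p in P_true])
B_hat = mm_multinomial(X, Y)
print(B_hat - B_hat[:, [0]])   # category scores as log odds vs. category 1
```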
In this paper, we consider the problem of experience rating within the classic Markov chain life insurance framework. We begin by investigating various multivariate mixed Poisson models with mixing distributions encompassing independent Gamma, hierarchical Gamma, and multivariate phase-type. In particular, we demonstrate how maximum likelihood estimation for these proposed models can be performed using expectation-maximization algorithms, which might be of independent interest. Subsequently, we establish a link between mixed Poisson distributions and the problem of pricing group disability insurance contracts that exhibit heterogeneity. We focus on shrinkage estimation of disability and recovery rates, taking into account sampling effects such as right-censoring. Finally, we showcase the practicality of these shrinkage estimators through a numerical study based on simulated yet realistic insurance data. Our findings highlight that by allowing for dependency between latent group effects, estimates of recovery and disability rates mutually improve, leading to enhanced predictive performance.
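A minimal sketch of EM for the simplest of the mixing distributions mentioned above, independent Gamma effects (Poisson-Gamma groups, no censoring); the single Newton step in the M-step makes this a generalized EM and is an illustrative simplification:

```python
import numpy as np
from scipy.special import digamma, polygamma

def em_poisson_gamma(N, E, iters=200):
    """EM for a mixed Poisson model: N_gj | Theta_g ~ Poisson(Theta_g * E_gj),
    Theta_g ~ Gamma(alpha, beta) (shape/rate), groups g = 1..G.

    The posterior of Theta_g is Gamma(alpha + sum_j N_gj, beta + sum_j E_gj),
    so the E-step is closed form; the M-step updates beta in closed form and
    takes one Newton step for alpha (a generalized EM)."""
    sN = np.array([n.sum() for n in N])
    sE = np.array([e.sum() for e in E])
    alpha, beta = 1.0, 1.0
    for _ in range(iters):
        # E-step: posterior moments of Theta_g and log Theta_g
        a_post, b_post = alpha + sN, beta + sE
        m = a_post / b_post
        mlog = digamma(a_post) - np.log(b_post)
        # M-step: beta given alpha; Newton step for alpha
        beta = alpha / m.mean()
        score = np.log(beta) - digamma(alpha) + mlog.mean()
        alpha = max(alpha + score / polygamma(1, alpha), 1e-8)
    return alpha, beta

# Shrinkage estimate of each group's rate: the posterior mean
# (alpha + sum N) / (beta + sum E) pulls raw rates toward the prior mean.
rng = np.random.default_rng(0)
G = 50
theta = rng.gamma(3.0, 1 / 2.0, G)                    # true group effects
E = [rng.uniform(0.5, 2.0, 20) for _ in range(G)]     # exposures
N = [rng.poisson(th * e) for th, e in zip(theta, E)]
alpha, beta = em_poisson_gamma(N, E)
rates = (alpha + np.array([n.sum() for n in N])) \
        / (beta + np.array([e.sum() for e in E]))
print("alpha, beta:", alpha, beta, "| first shrunken rates:", rates[:3])
```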
In this paper, we propose a novel adaptive stochastic extended iterative method, which can be viewed as an improved extension of the randomized extended Kaczmarz (REK) method, for finding the unique minimum Euclidean norm least-squares solution of a given linear system. In particular, we introduce three equivalent stochastic reformulations of the linear least-squares problem: stochastic unconstrained and constrained optimization problems, and the stochastic multiobjective optimization problem. We then alternately employ the adaptive variants of the stochastic heavy ball momentum (SHBM) method, which utilize iterative information to update the parameters, to solve the stochastic reformulations. We prove that our method converges linearly in expectation, addressing an open problem in the literature related to designing theoretically supported adaptive SHBM methods. Numerical experiments show that our adaptive stochastic extended iterative method has strong advantages over the non-adaptive one.
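As a rough sketch of the ingredients (not the paper's adaptive parameter rules), the following Python code runs randomized extended Kaczmarz with a fixed heavy ball momentum weight on an inconsistent least-squares problem; the fixed $\beta$ and all problem sizes are illustrative assumptions.

```python
import numpy as np

def rek_momentum(A, b, iters=20000, beta=0.3, seed=0):
    """Randomized extended Kaczmarz with (fixed) heavy ball momentum.

    z tracks the component of b outside range(A); x performs Kaczmarz
    steps toward A x = b - z. beta is a fixed momentum weight here; the
    paper's adaptive update rules for this parameter are not reproduced."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_p = np.sum(A**2, axis=1); row_p /= row_p.sum()
    col_p = np.sum(A**2, axis=0); col_p /= col_p.sum()
    x = x_prev = np.zeros(n)
    z = b.astype(float).copy()
    for _ in range(iters):
        j = rng.choice(n, p=col_p)       # column step: drive z to b - A x*
        z -= (A[:, j] @ z) / (A[:, j] @ A[:, j]) * A[:, j]
        i = rng.choice(m, p=row_p)       # row step: Kaczmarz on A x = b - z
        step = (b[i] - z[i] - A[i] @ x) / (A[i] @ A[i]) * A[i]
        x, x_prev = x + step + beta * (x - x_prev), x
    return x

# Inconsistent overdetermined system: compare with the pseudoinverse solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 20))
b = rng.standard_normal(200)
x = rek_momentum(A, b)
print("error vs. least-squares solution:", np.linalg.norm(x - np.linalg.pinv(A) @ b))
```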
The paper establishes an analog of the Whittaker-Shannon-Kotelnikov sampling theorem with fast-decreasing coefficients, as well as a new modification of the corresponding interpolation formula applicable to non-vanishing bounded continuous signals of a general type.
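For reference, the classical interpolation formula being modified reads, for a square-integrable signal $f$ bandlimited to $[-\pi,\pi]$ and sampled at the integers,
\[
f(t) \;=\; \sum_{k\in\mathbb{Z}} f(k)\,\frac{\sin \pi(t-k)}{\pi(t-k)};
\]
for merely bounded, non-vanishing signals the samples $f(k)$ need not decay, which is what motivates a modification with fast-decreasing coefficients. This is the classical statement, not the paper's modified formula.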
Interpreting data with mathematical models is an important aspect of real-world applied mathematical modelling. Very often we are interested in understanding the extent to which a particular data set informs and constrains model parameters. This question is closely related to the concept of parameter identifiability, and in this article we present a series of computational exercises to introduce tools that can be used to assess parameter identifiability, estimate parameters, and generate model predictions. Taking a likelihood-based approach, we show that very similar ideas and algorithms can be used to deal with a range of different mathematical modelling frameworks. The exercises and results presented in this article are supported by a suite of open access codes that can be accessed on GitHub.
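A minimal sketch of the likelihood-based workflow described above: profile the log-likelihood of a logistic growth model over one parameter, optimizing out the other, and read off an approximate confidence interval from the $\chi^2_1$ threshold; the model, data, and noise level are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

t = np.linspace(0, 10, 15)

def model(r, K, c0=5.0):
    """Logistic growth solution c(t) with rate r and carrying capacity K."""
    return K * c0 / (c0 + (K - c0) * np.exp(-r * t))

rng = np.random.default_rng(0)
sigma = 2.0
y = model(0.8, 60.0) + rng.normal(0.0, sigma, t.size)   # synthetic data

def loglik(r, K):
    return norm.logpdf(y, model(r, K), sigma).sum()

# Profile log-likelihood for K: maximize over the nuisance parameter r.
K_grid = np.linspace(40.0, 90.0, 60)
profile = np.array([
    -minimize_scalar(lambda r, K=K: -loglik(r, K),
                     bounds=(0.01, 5.0), method="bounded").fun
    for K in K_grid
])
profile -= profile.max()

# Approximate 95% CI: K values with profile above -chi2_{1,0.95}/2 = -1.92.
inside = K_grid[profile > -1.92]
print("95%% CI for K: [%.1f, %.1f]" % (inside.min(), inside.max()))
```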
In this work, we apply multi-goal oriented error estimation to the finite element method. In particular, we use the dual weighted residual method and apply it to a model problem. This model problem consists of locally different coercive partial differential equations arranged in a checkerboard pattern, where the solution is continuous across the interfaces. In addition to estimating the error, we localize it using a partition-of-unity technique. The resulting adaptive algorithm is substantiated with a numerical example.
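For orientation, the standard single-goal dual weighted residual identity underlying this approach reads
\[
J(u) - J(u_h) \;\approx\; \rho(u_h)(z - i_h z), \qquad \rho(u_h)(\varphi) := F(\varphi) - a(u_h)(\varphi),
\]
where $z$ solves the adjoint problem associated with the goal functional $J$ and $i_h z$ is its interpolation into the discrete space. In the multi-goal setting the individual functionals are combined into a weighted goal $J = \sum_i w_i J_i$, and a partition of unity $\{\psi_i\}$ localizes the estimator into node-wise indicators $\rho(u_h)\bigl((z - i_h z)\,\psi_i\bigr)$. This is the schematic identity, not the precise estimator of this work.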