Mortality forecasting plays a pivotal role in the insurance and financial risk management of life insurers, pension funds, and social security systems. Mortality data are usually high-dimensional in nature, which favors factor model approaches to modelling and forecasting. This paper introduces a new forecast-driven hierarchical factor model (FHFM) customized for mortality forecasting. Compared to existing models, which capture only the cross-sectional variation or the time-serial dependence in the dimension reduction step, the new model captures both features efficiently under a hierarchical structure and provides insight into the dynamic variation of mortality patterns over time. Comparing with the static PCA utilized in Lee and Carter (1992), the dynamic PCA introduced in Lam et al. (2011), as well as other existing mortality modelling methods, we find that this approach provides both better estimation results and superior out-of-sample forecasting performance. Simulation studies further illustrate the advantages of the proposed model under different data structures. Finally, empirical studies using US mortality data demonstrate the implications and significance of this new model for life expectancy forecasting and life annuity pricing.
Simultaneous localization and mapping (SLAM) is a critical capability for any autonomous underwater vehicle (AUV). However, robust, accurate state estimation is still a work in progress when using low-cost sensors. We propose enhancing a typical low-cost sensor package using widely available and often free prior information: overhead imagery. Given an AUV's sonar image and a partially overlapping, globally referenced overhead image, we propose using a convolutional neural network (CNN) to generate a synthetic overhead image predicting the above-surface appearance of the sonar image contents. We then use this synthetic overhead image to register our observations to the provided global overhead image. Once registered, the transformation is introduced as a factor into a pose SLAM factor graph. We use a state-of-the-art simulation environment to perform validation over a series of benchmark trajectories and quantitatively show the improved accuracy of robot state estimation using the proposed approach. We also show qualitative outcomes from a real AUV field deployment. Video attachment: //youtu.be/_uWljtp58ks
Gaussian Graphical Models (GGMs) are widely used for exploratory data analysis in various fields such as genomics, ecology, and psychometrics. In a high-dimensional setting, when the number of variables exceeds the number of observations by several orders of magnitude, estimating a GGM is a difficult and unstable optimization problem, so clustering of variables or variable selection is often performed prior to GGM estimation. We propose a new method that simultaneously infers a hierarchical clustering structure and the graphs describing the structure of independence at each level of the hierarchy. The method is based on solving a convex optimization problem that combines a graphical lasso penalty with a fused-lasso-type penalty. Results on real and synthetic data are presented.
Large longitudinal studies provide a wealth of valuable information, especially in medical applications. A problem that must be addressed in order to exploit their full potential is the correlation between intra-subject measurements taken at different times. For data in Euclidean space, this can be done with hierarchical models, that is, models that account for intra-subject and between-subject variability in two different stages. However, data from medical studies often take values in nonlinear manifolds. For such data, as a first step, geodesic hierarchical models have been developed that generalize the linear ansatz by assuming that time-induced intra-subject variations occur along a generalized straight line in the manifold. This assumption, however, often fails (e.g., for periodic motion or processes with saturation). We propose a hierarchical model for manifold-valued data that extends this approach to trends along higher-order curves, namely B\'ezier splines in the manifold. To this end, we present a principled way of comparing shape trends in terms of a functional-based Riemannian metric. Remarkably, this metric allows efficient yet simple computations by virtue of a variational time discretization requiring only the solution of regression problems. We validate our model on longitudinal data from the osteoarthritis initiative, including classification of disease progression.
Evaluating predictive models is a crucial task in predictive analytics. This process is especially challenging with time series data, where the observations exhibit temporal dependencies. Several studies have analysed how different performance estimation methods compare with each other for approximating the true loss incurred by a given forecasting model. However, these studies do not address how the estimators behave for model selection: the ability to select the best solution among a set of alternatives. We address this issue and compare a set of estimation methods for model selection in time series forecasting tasks. We attempt to answer two main questions: (i) how often is the best possible model selected by the estimators; and (ii) what is the performance loss when it is not. We find empirically that the estimators' accuracy in selecting the best solution is low, and that the overall forecasting performance loss associated with the model selection process ranges from 1.2% to 2.3%. We also discover that some factors, such as the sample size, are important for the relative performance of the estimators.
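The selection-regret question above can be made concrete with a small sketch: a holdout estimator chooses among candidate AR orders on a synthetic series, and the resulting test-set loss is compared against the best model in hindsight. This is purely illustrative and not the paper's experimental setup; the series, the candidate set, and the split points are arbitrary choices made here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AR(2) series as a stand-in forecasting task.
n = 600
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal()

def lagged(series, p):
    """Design matrix and targets for one-step-ahead AR(p) regression."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    return X, series[p:]

def fit_ar(series, p):
    X, t = lagged(series, p)
    return np.linalg.lstsq(X, t, rcond=None)[0]

def mse(series, beta, p):
    X, t = lagged(series, p)
    return float(np.mean((X @ beta - t) ** 2))

train, val, test = y[:400], y[400:500], y[500:]
candidates = range(1, 6)
betas = {p: fit_ar(train, p) for p in candidates}

selected = min(candidates, key=lambda p: mse(val, betas[p], p))   # holdout estimator's pick
oracle = min(candidates, key=lambda p: mse(test, betas[p], p))    # best model in hindsight

# Relative performance loss caused by the model selection step.
regret = mse(test, betas[selected], selected) / mse(test, betas[oracle], oracle) - 1.0
```

Here the simple holdout plays the role of one estimator; the studies compared in the abstract swap in alternatives such as cross-validation variants and repeated splits.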
As a traditional and widely adopted mortality rate projection technique, the Lee-Carter model represents the log mortality rate as a simple bilinear form $\log(m_{x,t})=a_x+b_xk_t$. The model has been extensively studied throughout the past 30 years; however, little attention has been paid to its performance in the presence of outliers, particularly regarding the parameter estimation of $b_x$. In this paper, we propose a robust estimation method for the Lee-Carter model by formulating it as a probabilistic principal component analysis (PPCA) with multivariate $t$-distributions, together with an efficient expectation-maximization (EM) algorithm for implementation. The advantages of the method are threefold: it yields significantly more robust estimates of both $b_x$ and $k_t$; it preserves the fundamental interpretation of $b_x$ as the first principal component, as in the traditional approach; and it can be flexibly integrated into other existing time series models for $k_t$. The parameter uncertainties are examined using a standard residual bootstrap. A simulation study based on the Human Mortality Database shows the superior performance of the proposed model compared to conventional approaches.
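For reference, the classical singular-value-decomposition fit of the bilinear form above can be sketched as follows. This is the conventional non-robust estimator that the proposed PPCA approach is designed to improve upon; the function name and the identification constraints are the usual textbook conventions, not anything specific to the paper.

```python
import numpy as np

def fit_lee_carter(log_m):
    """Classical SVD fit of log(m_{x,t}) = a_x + b_x * k_t.

    log_m: (n_ages, n_years) array of log central mortality rates.
    Uses the usual identification constraints sum_x b_x = 1 and
    sum_t k_t = 0 (the latter holds automatically after row-centering).
    """
    a = log_m.mean(axis=1)                         # a_x: average log rate per age
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b, k = U[:, 0], s[0] * Vt[0]                   # rank-1 term b_x * k_t
    scale = b.sum()                                # impose sum_x b_x = 1
    return a, b / scale, k * scale
```

Dividing $b$ and multiplying $k$ by the same factor leaves the product $b_xk_t$ unchanged, so the rescaling also resolves the sign ambiguity of the SVD. The abstract's point is that this estimator, being a least-squares fit, is easily distorted by outlying rates.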
Neural density estimators have proven remarkably powerful in performing efficient simulation-based Bayesian inference in various research domains. In particular, the BayesFlow framework uses a two-step approach to enable amortized parameter estimation in settings where the likelihood function is implicitly defined by a simulation program. But how faithful is such inference when simulations are poor representations of reality? In this paper, we conceptualize the types of model misspecification arising in simulation-based inference and systematically investigate the performance of the BayesFlow framework under these misspecifications. We propose an augmented optimization objective that imposes a probabilistic structure on the latent data space, and we utilize maximum mean discrepancy (MMD) to detect, during inference, potentially catastrophic misspecifications that undermine the validity of the obtained results. We verify our detection criterion on a number of artificial and realistic misspecifications, ranging from toy conjugate models to complex models of decision making and disease outbreak dynamics applied to real data. Further, we show that posterior inference errors increase as a function of the distance between the true data-generating distribution and the typical set of simulations in the latent summary space. Thus, we demonstrate the dual utility of MMD as a method for detecting model misspecification and as a proxy for verifying the faithfulness of amortized Bayesian inference.
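As a minimal sketch of the detection ingredient, the squared MMD between two samples can be estimated with a Gaussian kernel as below. The fixed bandwidth and the biased V-statistic form are arbitrary simplifications made here, not the framework's actual criterion, which operates on learned summary statistics.

```python
import numpy as np

def mmd2_rbf(X, Y, bandwidth=1.0):
    """Biased (V-statistic) estimate of squared MMD with a Gaussian kernel.

    X: (n, d) sample array, Y: (m, d) sample array.
    Large values indicate the two samples come from different distributions.
    """
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
        return np.exp(-d2 / (2.0 * bandwidth ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

In the spirit of the abstract, one would compare observed data (embedded in the latent summary space) against simulated data and flag a misspecification when this statistic is unusually large.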
Panel Vector Autoregressions (PVARs) are a popular tool for analyzing multi-country datasets. However, the number of estimated parameters can be enormous, leading to computational and statistical issues. In this paper, we develop fast Bayesian methods for estimating PVARs using integrated rotated Gaussian approximations. We exploit the fact that domestic information is often more important than international information and group the coefficients accordingly. Fast approximations are used to estimate the latter, while the former are estimated with precision using Markov chain Monte Carlo techniques. Using a huge model of the world economy, we illustrate that the approach quickly produces competitive forecasts.
Human parsing targets pixel-wise semantic understanding of the human body. As human bodies are inherently hierarchically structured, how to model human structures is the central theme of this task. Focusing on this, we seek to simultaneously exploit the representational capacity of deep graph networks and the hierarchical structure of the human body. In particular, we make the following two contributions. First, three kinds of part relations, i.e., decomposition, composition, and dependency, are, for the first time, completely and precisely described by three distinct relation networks. This is in stark contrast to previous parsers, which focus on only a portion of the relations and adopt a type-agnostic relation modeling strategy. More expressive relation information can be captured by explicitly constraining the parameters of the relation networks to satisfy the specific characteristics of the different relations. Second, previous parsers largely ignore the need for an approximation algorithm over the loopy human hierarchy, whereas we instead develop an iterative reasoning process by assimilating generic message-passing networks with their edge-typed, convolutional counterparts. With these efforts, our parser lays the foundation for more sophisticated and flexible reasoning over human relation patterns. Comprehensive experiments on five datasets demonstrate that our parser sets a new state of the art on each.
Graph Neural Networks (GNNs), which generalize deep neural networks to graph-structured data, have drawn considerable attention and achieved state-of-the-art performance in numerous graph-related tasks. However, existing GNN models mainly focus on designing graph convolution operations, while the graph pooling (or downsampling) operations, which play an important role in learning hierarchical representations, are usually overlooked. In this paper, we propose a novel graph pooling operator, called Hierarchical Graph Pooling with Structure Learning (HGP-SL), which can be integrated into various graph neural network architectures. HGP-SL incorporates graph pooling and structure learning into a unified module to generate hierarchical representations of graphs. More specifically, the graph pooling operation adaptively selects a subset of nodes to form an induced subgraph for the subsequent layers. To preserve the integrity of the graph's topological information, we further introduce a structure learning mechanism that learns a refined graph structure for the pooled graph at each layer. By combining the HGP-SL operator with graph neural networks, we perform graph-level representation learning with a focus on the graph classification task. Experimental results on six widely used benchmarks demonstrate the effectiveness of our proposed model.
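To make the node-selection step concrete, here is a generic score-based top-k pooling sketch in the spirit of hierarchical pooling operators. It is not the HGP-SL operator itself, which additionally learns a refined structure for the pooled graph; the scoring vector w stands in for a learnable parameter.

```python
import numpy as np

def topk_pool(A, X, w, ratio=0.5):
    """Select the highest-scoring nodes and return the induced subgraph.

    A: (n, n) adjacency matrix, X: (n, d) node features,
    w: (d,) scoring vector (a stand-in for a learnable parameter).
    """
    scores = X @ w / (np.linalg.norm(w) + 1e-12)      # project features onto w
    k = max(1, int(round(ratio * X.shape[0])))        # number of nodes to keep
    idx = np.sort(np.argsort(scores)[::-1][:k])       # top-k nodes, original order
    gate = np.tanh(scores[idx])[:, None]              # gate features so w receives gradient
    return A[np.ix_(idx, idx)], X[idx] * gate, idx
```

After such a pooling step, HGP-SL's structure learning mechanism would replace the naive induced adjacency `A[np.ix_(idx, idx)]` with a learned, refined graph for the next layer.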