
In functional data analysis, functional linear regression has attracted significant attention recently. Herein, we consider the case where both the response and covariates are functions. There are two available approaches for addressing such a situation: concurrent and nonconcurrent functional models. In the former, the value of the functional response at a given domain point depends only on the value of the functional regressors evaluated at the same domain point, whereas, in the latter, the functional covariates evaluated at each point of their domain have a non-null effect on the response at any point of its domain. To balance these two extremes, we propose a locally sparse functional regression model in which the functional regression coefficient is allowed (but not forced) to be exactly zero for a subset of its domain. This is achieved using a suitable basis representation of the functional regression coefficient and exploiting an overlapping group-Lasso penalty for its estimation. We introduce efficient computational strategies based on majorization-minimization algorithms and discuss appealing theoretical properties regarding the model support and consistency of the proposed estimator. We further illustrate the empirical performance of the method through simulations and two applications related to human mortality and bidding in the energy market.
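A minimal sketch of the penalty idea, not the paper's implementation: the regression coefficient is expanded in a basis, and an overlapping group-Lasso term over windows of consecutive basis coefficients allows (but does not force) whole runs of coefficients, and hence the coefficient function on part of its domain, to be exactly zero. The function name, group size, and weight below are illustrative assumptions.

```python
import numpy as np

def overlapping_group_lasso_penalty(b, group_size=3, weight=1.0):
    """Sum of Euclidean norms over overlapping windows of basis coefficients."""
    p = len(b)
    groups = [list(range(j, j + group_size)) for j in range(p - group_size + 1)]
    return weight * sum(np.linalg.norm(b[g]) for g in groups)

# A coefficient vector that is zero over a run contributes nothing there,
# so the fitted coefficient function can vanish on a subset of its domain.
b = np.array([0.0, 0.0, 0.0, 0.0, 1.2, 0.8, 0.0, 0.0, 0.0])
penalty = overlapping_group_lasso_penalty(b)
```

Because the groups overlap, zeroing one group pulls its neighbors toward zero as well, which is what produces contiguous null regions rather than isolated zero coefficients.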

Related content

Quantile regression is a field with steadily growing importance in statistical modeling. It is a complementary method to linear regression, since computing a range of conditional quantile functions provides a more accurate modelling of the stochastic relationship among variables, especially in the tails. We introduce a non-restrictive and highly flexible nonparametric quantile regression approach based on C- and D-vine copulas. Vine copulas allow for separate modeling of marginal distributions and the dependence structure in the data, and can be expressed through a graph theoretical model given by a sequence of trees. This way we obtain a quantile regression model that overcomes typical issues of quantile regression, such as quantile crossings and collinearity, as well as the need for transformations and interactions of variables. Our approach incorporates a two-step-ahead ordering of variables, obtained by maximizing the conditional log-likelihood of the tree sequence while taking into account the next two tree levels. Further, we show that the nonparametric conditional quantile estimator is consistent. The performance of the proposed methods is evaluated in both low- and high-dimensional settings using simulated and real-world data. The results support the superior prediction ability of the proposed models.
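An illustrative sketch, not the paper's C-/D-vine estimator: for the simplest copula-based case, a bivariate Gaussian copula with correlation rho, the conditional quantile of V given U = u has a closed form on the uniform scale, and the resulting quantile curves at different levels are monotone in the level by construction, so they never cross. All names below are assumptions for illustration.

```python
from math import sqrt
from statistics import NormalDist

N = NormalDist()

def gaussian_copula_cond_quantile(u, alpha, rho):
    """alpha-quantile of V given U = u under a Gaussian copula (uniform scale)."""
    z = rho * N.inv_cdf(u) + sqrt(1.0 - rho ** 2) * N.inv_cdf(alpha)
    return N.cdf(z)

u = 0.9
# Quantile levels 0.1 < 0.5 < 0.9 yield strictly increasing conditional
# quantiles: no quantile crossing, unlike separately fitted linear quantiles.
qs = [gaussian_copula_cond_quantile(u, a, rho=0.7) for a in (0.1, 0.5, 0.9)]
```

Vine copulas generalize this one-pair-copula construction to many variables by chaining such conditional transforms along a tree sequence, while keeping the marginal models separate.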

There has been a rich development of vector autoregressive (VAR) models for modeling temporally correlated multivariate outcomes. However, the existing VAR literature has largely focused on single-subject parametric analysis, with some recent extensions to multi-subject modeling with known subgroups. Motivated by the need for flexible Bayesian methods that can pool information across heterogeneous samples in an unsupervised manner, we develop a novel class of non-parametric Bayesian VAR models based on heterogeneous multi-subject data. In particular, we propose a product of Dirichlet process mixture priors that enables separate clustering at multiple scales, which results in partially overlapping clusters that provide greater flexibility. We develop several variants of the method to cater to varying levels of heterogeneity. We implement an efficient posterior computation scheme and establish posterior consistency properties under reasonable assumptions on the true density. Extensive numerical studies show distinct advantages over competing methods in terms of estimating model parameters and identifying the true clustering and sparsity structures. Our analysis of resting state fMRI data from the Human Connectome Project reveals biologically interpretable differences between distinct fluid intelligence groups, and reproducible parameter estimates. In contrast, single-subject VAR analyses followed by permutation testing result in negligible differences, which is biologically implausible.
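As a point of reference, a single-subject VAR(1) least-squares fit is the basic building block that the Bayesian multi-subject model pools and clusters over; the Dirichlet process mixture machinery itself is not reproduced here. The dimensions, coefficients, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([[0.5, 0.1],
                   [0.0, 0.4]])   # stable VAR(1) transition matrix
T = 1000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = y[t - 1] @ A_true.T + 0.1 * rng.standard_normal(2)

# OLS: regress y_t on y_{t-1}; lstsq solves Y = X @ A^T, so transpose back.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
```

In the multi-subject setting of the abstract, one such transition matrix per subject would be modeled jointly, with cluster-specific parameters shared across subjects at multiple scales.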

We consider a parametric modelling approach for survival data where covariates are allowed to enter the model through multiple distributional parameters, i.e., scale and shape. This is in contrast with the standard convention of having a single covariate-dependent parameter, typically the scale. Taking what is referred to as a multi-parameter regression (MPR) approach to modelling has been shown to produce flexible and robust models with relatively low model complexity cost. However, it is very common to have clustered data arising from survival analysis studies, and this is an aspect that remains underdeveloped in the MPR context. The purpose of this article is to extend MPR models to handle multivariate survival data by introducing random effects in both the scale and the shape regression components. We consider a variety of possible dependence structures for these random effects (independent, shared, and correlated), and estimation proceeds using an h-likelihood approach. The performance of our estimation procedure is investigated by way of an extensive simulation study, and the merits of our modelling approach are illustrated through applications to two real data examples, a lung cancer dataset and a bladder cancer dataset.
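A hedged sketch of the fixed-effects core of a Weibull MPR likelihood: both the scale and the shape depend on covariates through log links. The random effects and h-likelihood estimation of the paper are omitted, and the parameter values, sample size, and function names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = np.column_stack([np.ones(n), rng.integers(0, 2, n)])  # intercept + binary covariate
beta_true = np.array([1.0, -0.5])    # scale regression coefficients
alpha_true = np.array([0.5, 0.3])    # shape regression coefficients
lam = np.exp(x @ beta_true)          # log link for the scale
k = np.exp(x @ alpha_true)           # log link for the shape
t = lam * rng.weibull(k)             # Weibull(shape=k, scale=lam) survival times

def weibull_mpr_nll(beta, alpha):
    """Negative log-likelihood with covariate-dependent scale and shape."""
    lam = np.exp(x @ beta)
    k = np.exp(x @ alpha)
    z = t / lam
    return -np.sum(np.log(k / lam) + (k - 1.0) * np.log(z) - z ** k)
```

The likelihood should prefer the generating parameters over a null model with no covariate effects (unit scale and shape), which is the kind of flexibility the MPR approach buys over a single covariate-dependent parameter.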

Multi-criteria decision-making often requires finding a small representative subset from the database. A recently proposed method is the regret minimization set (RMS) query. RMS returns a fixed-size subset S of dataset D that minimizes the regret ratio of S (the difference between the score of the top-1 tuple in S and the score of the top-1 tuple in D, over all possible utility functions). Existing work showed that the regret ratio is not able to accurately quantify the regret level of a user. Further, relative to the regret ratio, users more readily understand the notion of rank. Consequently, that work considered the problem of finding a minimal set S with at most k rank-regret (the minimal rank of the tuples of S in the sorted list of D). Corresponding to RMS, we focus on the dual version of the above problem, defined as the rank-regret minimization (RRM) problem, which seeks to find a fixed-size set S that minimizes the maximum rank-regret over all possible utility functions. Further, we generalize RRM and propose the restricted rank-regret minimization (RRRM) problem to minimize the rank-regret of S for functions in a restricted space. The solution for RRRM usually has a lower regret level and can better serve the specific preferences of some users. In 2D space, we design a dynamic programming algorithm, 2DRRM, to find the optimal solution for RRM. In high-dimensional (HD) space, we propose an algorithm, HDRRM, for RRM that bounds the output size and introduces a double approximation guarantee for rank-regret. Both 2DRRM and HDRRM can be generalized to the RRRM problem. Extensive experiments are performed on synthetic and real datasets to verify the efficiency and effectiveness of our algorithms.
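A sketch of the objective being minimized, not of the 2DRRM dynamic program itself: in 2D, linear utilities can be parameterized as w = (t, 1 - t), and the rank-regret of a candidate subset S is the worst (largest) best-rank that S achieves in the full ranking of D over a grid of directions. The grid resolution and dataset are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.random((50, 2))          # 50 tuples with two criteria
S_idx = np.array([0, 1, 2])      # a candidate subset (indices into D)

def rank_regret(D, S_idx, n_dirs=200):
    """Max over utility directions of the best 1-based rank of S's tuples in D."""
    worst = 1
    for t in np.linspace(0.0, 1.0, n_dirs):
        scores = D @ np.array([t, 1.0 - t])
        order = np.argsort(-scores)                # full ranking of D
        ranks = np.empty(len(D), dtype=int)
        ranks[order] = np.arange(1, len(D) + 1)    # rank of each tuple
        worst = max(worst, int(ranks[S_idx].min()))
    return worst

rr_subset = rank_regret(D, S_idx)
rr_full = rank_regret(D, np.arange(len(D)))        # full dataset: always 1
```

The RRRM variant of the abstract corresponds to restricting the grid of directions to a subinterval of t, which can only decrease the resulting rank-regret.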

This article is motivated by studying multisensory effects on brain activities in intracranial electroencephalography (iEEG) experiments. Differential brain activities to multisensory stimulus presentations are zero in most regions and non-zero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest being focused not only on estimating nonparametric functions but also on recovering the function supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform the sparsity of functions into sparsity of a coefficient vector of increasing dimension, and propose a fast non-convex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the $L_2$ norm and exhibit a phase transition phenomenon. For support estimation, we derive a convergence rate under the $L_{\infty}$ norm that leads to a sparsistency property under $\delta$-sparsity, and provide a simple sufficient regularity condition under which a strict sparsistency property is established. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulation and an application to a novel iEEG dataset to study multisensory integration. We integrate the proposed method into RAVE, an R package that is gaining popularity in the iEEG community.
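A small sketch of the B-spline device the abstract describes: expanding a function in a compactly supported basis turns local sparsity of the function into sparsity of its coefficient vector. Degree-1 B-splines (hat functions) are used here for brevity rather than the cubic splines typically used in practice; the knot grid and coefficients are illustrative assumptions.

```python
import numpy as np

knots = np.linspace(0.0, 1.0, 11)   # t_0, ..., t_10

def hat(t, j):
    """Degree-1 B-spline centered at knots[j], supported on (knots[j-1], knots[j+1])."""
    up = (t - knots[j - 1]) / (knots[j] - knots[j - 1])
    down = (knots[j + 1] - t) / (knots[j + 1] - knots[j])
    return np.maximum(np.minimum(up, down), 0.0)

b = np.ones(9)       # coefficients for interior knots 1..9
b[3:6] = 0.0         # zero the coefficients at knots 4, 5, 6

def f(t):
    return sum(b[j - 1] * hat(t, j) for j in range(1, 10))

# Because each basis function has local support, f is EXACTLY zero on
# [knots[4], knots[6]] = [0.4, 0.6], not merely small there.
```

This is why group-type penalties on blocks of spline coefficients can recover the support of the function: zeroing a contiguous block of coefficients zeroes the function on the corresponding knot span.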

We consider the problem of inferring the conditional independence graph (CIG) of a sparse, high-dimensional stationary multivariate Gaussian time series. A sparse-group lasso-based frequency-domain formulation of the problem, based on the frequency-domain sufficient statistic for the observed time series, is presented. We investigate an alternating direction method of multipliers (ADMM) approach for optimization of the sparse-group lasso penalized log-likelihood. We provide sufficient conditions for convergence in the Frobenius norm of the inverse PSD estimators to the true value, jointly across all frequencies, where the number of frequencies is allowed to increase with sample size. This result also yields a rate of convergence. We also empirically investigate selection of the tuning parameters based on the Bayesian information criterion, and illustrate our approach using numerical examples utilizing both synthetic and real data.
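A sketch of the frequency-domain sufficient statistic the formulation is built on: a smoothed periodogram of a multivariate series, averaging the 2m+1 Fourier frequencies around a target frequency. The sparse-group lasso penalty and ADMM optimization are not shown; the series length, dimension, and smoothing span are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m = 512, 3, 5
x = rng.standard_normal((n, p))                       # placeholder time series
d = np.fft.fft(x, axis=0) / np.sqrt(2.0 * np.pi * n)  # DFT at Fourier frequencies

def smoothed_periodogram(k):
    """Average of d(w) d(w)^H over the 2m+1 frequencies centered at index k."""
    idx = np.arange(k - m, k + m + 1)
    return np.mean([np.outer(d[i], d[i].conj()) for i in idx], axis=0)

F = smoothed_periodogram(50)
# F is Hermitian positive semi-definite by construction (a mean of rank-one
# Hermitian PSD matrices), which is what makes penalized inverse-PSD
# estimation at each frequency well posed.
```

In the penalized log-likelihood of the abstract, one such matrix per frequency enters the Gaussian (Whittle-type) likelihood, and the sparse-group penalty couples the inverse-PSD entries across frequencies.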

This paper studies a basic notion of distributional shape known as orthounimodality (OU) and its use in shape-constrained distributionally robust optimization (DRO). As a key motivation, we argue that this type of DRO is well-suited to tackle multivariate extreme event estimation, as it gives statistically valid confidence bounds on target extremal probabilities. In particular, we explain how DRO can be used as a nonparametric alternative to conventional extreme value theory, which extrapolates tails based on theoretical limiting distributions and can face challenges in bias-variance control and other technical complications. We also explain how OU resolves the challenges in interpretability and robustness faced by existing distributional shape notions used in the DRO literature. Methodologically, we characterize the extreme points of the OU distribution class in terms of what we call OU sets and build a corresponding Choquet representation, which subsequently allows us to reduce OU-DRO into moment problems over infinite-dimensional random variables. We then develop, in the bivariate setting, a geometric approach to reduce such moment problems into finite dimension via a specially constructed variational problem designed to eliminate suboptimal solutions. Numerical results illustrate how our approach gives rise to valid and competitive confidence bounds for extremal probabilities.
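To make the "worst case over a distribution class" viewpoint concrete without reproducing the OU-DRO machinery, here is the classical univariate moment problem with a closed-form solution, Cantelli's one-sided bound: the supremum of a tail probability over all distributions matching a given mean and variance. This is only an analogy to the abstract's (shape-constrained, multivariate) moment problems.

```python
def cantelli_bound(mu, sigma2, a):
    """sup P(X >= a) over all distributions with mean mu and variance sigma2."""
    if a <= mu:
        return 1.0  # mass can be placed entirely at or above a
    return sigma2 / (sigma2 + (a - mu) ** 2)

# Worst-case tail probability beyond 2 and 3 standard deviations of the mean.
b2 = cantelli_bound(0.0, 1.0, 2.0)   # = 1 / (1 + 4) = 0.2
b3 = cantelli_bound(0.0, 1.0, 3.0)
```

Shape constraints such as orthounimodality tighten this kind of bound by excluding the adversarial two-point distributions that attain it, which is the interpretability and robustness gain the abstract refers to.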

Spatially dependent data frequently occur in many fields such as spatial econometrics and epidemiology. To deal with the dependence among variables and estimate quantile-specific covariate effects, spatial quantile autoregressive models (SQAR models) are introduced. Conventional quantile regression focuses only on fitting individual models and ignores the joint examination of multiple conditional quantile functions, which would provide a more comprehensive view of the relationship between the response and covariates. Thus, it is necessary to study the different regression slopes at different quantiles, especially in situations where the quantile coefficients share some common feature. However, traditional Wald multiple tests not only increase the computational burden but also inflate the false discovery rate (FDR). In this paper, we transform the estimation and examination problem into a penalization problem, which estimates the parameters at different quantiles and identifies interquantile commonality at the same time. To avoid the endogeneity caused by the spatial lag variables in SQAR models, we also introduce instrumental variables before estimation and propose two-stage estimation methods based on fused adaptive LASSO and fused adaptive sup-norm penalty approaches. The oracle properties of the proposed estimation methods are established. Through numerical investigations, it is demonstrated that the proposed methods lead to higher estimation efficiency than the traditional quantile regression.
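A minimal sketch of the fused penalty that encodes interquantile commonality: adjacent quantile levels whose slopes coincide contribute nothing, so penalized estimation favors merging coefficients across quantiles. The adaptive weights, the sup-norm variant, and the two-stage instrumental-variable estimation are omitted; the quantile levels and slopes below are hypothetical.

```python
import numpy as np

taus = np.array([0.25, 0.5, 0.75])
beta = np.array([0.8, 0.8, 1.3])   # hypothetical slope of one covariate per quantile

def fused_penalty(beta, lam=1.0):
    """L1 fusion of adjacent quantile-specific coefficients."""
    return lam * np.sum(np.abs(np.diff(beta)))

# The shared slope at tau = 0.25 and tau = 0.5 is not penalized; only the
# change between tau = 0.5 and tau = 0.75 contributes.
pen = fused_penalty(beta)
```

In the full method, this fusion term is added to the quantile-regression check-function loss, so minimization simultaneously estimates the slopes and decides which adjacent quantiles share a common coefficient.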

Autoregressive (AR) models are used to represent time-varying random processes in which the output depends linearly on previous terms and a stochastic term (the innovation). In the classical version, AR models are based on the normal distribution. However, this distribution does not allow for describing data with outliers and asymmetric behavior. In this paper, we study AR models with normal inverse Gaussian (NIG) innovations. The NIG distribution belongs to the class of semi-heavy-tailed distributions with a wide range of shapes and thus allows for describing real-life data with possible jumps. The expectation-maximization (EM) algorithm is used to estimate the parameters of the considered model. The efficacy of the estimation procedure is shown on simulated data. A comparative study is presented in which classical estimation algorithms, namely the Yule-Walker and conditional least squares methods, are also incorporated alongside the EM method for model parameter estimation. The applications of the introduced model are demonstrated on real-life financial data.
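A comparative sketch of the two classical AR(1) estimators mentioned alongside EM: Yule-Walker (based on the lag-1 autocorrelation) and conditional least squares (regression of the series on its lag). Gaussian innovations are used here for brevity; the paper's model uses NIG innovations, and the AR coefficient and sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
phi_true, T = 0.6, 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + rng.standard_normal()  # AR(1) with unit-variance noise

xc = x - x.mean()
# Yule-Walker: lag-1 sample autocorrelation.
phi_yw = np.dot(xc[1:], xc[:-1]) / np.dot(xc, xc)
# Conditional least squares: OLS slope of x_t on x_{t-1}.
phi_cls = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
```

Both are moment-based and remain consistent under heavier-tailed innovations with finite variance, which is why they serve as natural baselines for the EM estimator in the NIG setting.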

We study sparse linear regression over a network of agents, modeled as an undirected graph (with no centralized node). The estimation problem is formulated as the minimization of the sum of the local LASSO loss functions plus a quadratic penalty of the consensus constraint -- the latter being instrumental to obtain distributed solution methods. While penalty-based consensus methods have been extensively studied in the optimization literature, their statistical and computational guarantees in the high dimensional setting remain unclear. This work provides an answer to this open problem. Our contribution is two-fold. First, we establish statistical consistency of the estimator: under a suitable choice of the penalty parameter, the optimal solution of the penalized problem achieves near optimal minimax rate $\mathcal{O}(s \log d/N)$ in $\ell_2$-loss, where $s$ is the sparsity value, $d$ is the ambient dimension, and $N$ is the total sample size in the network -- this matches centralized sample rates. Second, we show that the proximal-gradient algorithm applied to the penalized problem, which naturally leads to distributed implementations, converges linearly up to a tolerance of the order of the centralized statistical error -- the rate scales as $\mathcal{O}(d)$, revealing an unavoidable speed-accuracy dilemma. Numerical results demonstrate the tightness of the derived sample rate and convergence rate scalings.
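A toy two-agent sketch of the penalized consensus formulation: each agent holds a local least-squares loss, a quadratic penalty couples the local copies of the parameter, and proximal gradient (soft-thresholding) yields updates that each agent could run using only its own data and its neighbor's current iterate. The problem sizes, penalty parameters, and step rule are illustrative assumptions, not those analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
d, n, s = 20, 50, 3
theta_star = np.zeros(d); theta_star[:s] = 1.0
X = [rng.standard_normal((n, d)) for _ in range(2)]
y = [Xi @ theta_star + 0.1 * rng.standard_normal(n) for Xi in X]
lam, gamma = 0.05, 1.0            # l1 weight and consensus penalty weight

# Step size from a Lipschitz bound on the smooth part (losses + coupling).
L = max(np.linalg.eigvalsh(Xi.T @ Xi / n).max() for Xi in X) + 2.0 * gamma
step = 1.0 / L
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

th = [np.zeros(d), np.zeros(d)]
for _ in range(500):
    # Each gradient uses only local data plus the other agent's iterate.
    g0 = -X[0].T @ (y[0] - X[0] @ th[0]) / n + gamma * (th[0] - th[1])
    g1 = -X[1].T @ (y[1] - X[1] @ th[1]) / n + gamma * (th[1] - th[0])
    th = [soft(th[0] - step * g0, step * lam),
          soft(th[1] - step * g1, step * lam)]

disagreement = np.linalg.norm(th[0] - th[1])
err = np.linalg.norm(th[0] - theta_star)
```

The residual disagreement between agents reflects the finite consensus penalty: the abstract's analysis quantifies how large gamma must be for this bias to fall below the centralized statistical error.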
