
Differential Granger causality, that is, understanding how Granger-causal relations differ between two related time series, is of interest in many scientific applications. Modeling each time series by a vector autoregressive (VAR) model, we propose a new method to directly learn the difference between the corresponding transition matrices in high dimensions. Key to the new method is an estimating equation, constructed from the Yule-Walker equation, that links the difference in transition matrices to the difference in the corresponding precision matrices. In contrast to separately estimating each transition matrix and then calculating the difference, the proposed direct estimation method only requires sparsity of the difference of the two VAR models, and hence allows hub nodes in each high-dimensional time series. The direct estimator is shown to be consistent in estimation and support recovery under mild assumptions. These results also lead to novel consistency results, with potentially faster convergence rates, for estimating differences between precision matrices of i.i.d. observations under weaker assumptions than existing results. We evaluate the finite sample performance of the proposed method using simulation studies and an application to electroencephalogram (EEG) data.
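
As a minimal illustration of the Yule-Walker link mentioned above (our notation, assuming for concreteness stationary VAR(1) models $X_t^{(k)} = A^{(k)} X_{t-1}^{(k)} + e_t^{(k)}$, $k = 1, 2$), each transition matrix can be expressed through the lag-0 and lag-1 autocovariances, so the difference of interest depends on the data only through these autocovariances and the precision matrices $(\Sigma_0^{(k)})^{-1}$:

$$\Sigma_1^{(k)} = A^{(k)} \Sigma_0^{(k)}, \quad k = 1, 2, \qquad\Longrightarrow\qquad \Delta := A^{(1)} - A^{(2)} = \Sigma_1^{(1)}\bigl(\Sigma_0^{(1)}\bigr)^{-1} - \Sigma_1^{(2)}\bigl(\Sigma_0^{(2)}\bigr)^{-1},$$

where $\Sigma_0^{(k)} = \mathrm{Cov}(X_t^{(k)})$ and $\Sigma_1^{(k)} = \mathrm{Cov}(X_t^{(k)}, X_{t-1}^{(k)})$. The estimating equation targets $\Delta$ directly through sample autocovariances, so only $\Delta$ itself, not the individual transition matrices, needs to be sparse.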

Related Content

In this paper we propose a tool for high-dimensional approximation based on trigonometric polynomials in which only low-dimensional interactions of variables are allowed. In a general high-dimensional setting, it is already possible to deal with special sampling sets such as sparse grids or rank-1 lattices; this requires black-box access to the function, i.e., the ability to evaluate it at any point. Here, we focus on scattered data points and frequency index sets grouped along the dimensions. From there we propose a fast matrix-vector multiplication, the grouped Fourier transform, for high-dimensional grouped index sets. These transforms can be used within the previously introduced method for approximating functions of low superposition dimension based on the analysis of variance (ANOVA) decomposition, where there is a one-to-one correspondence between the ANOVA terms and our proposed groups. The method is able to dynamically detect important sets of ANOVA terms in the approximation. In this paper, we consider the resulting least-squares problem and add different forms of regularization: classical Tikhonov regularization, i.e., regularized least squares, and the group lasso, which promotes sparsity across groups. As the latter admits no explicit solution formula, we apply the fast iterative shrinkage-thresholding algorithm (FISTA) to obtain the minimizer. Moreover, we discuss the possibility of incorporating smoothness information into the least-squares problem. Numerical experiments in underdetermined, overdetermined, and noisy settings indicate the applicability of our algorithms. While we consider periodic functions, the idea can be directly generalized to non-periodic functions as well.
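
As a rough illustration of the group-lasso component (a generic FISTA sketch for $\min_x \tfrac12\|Ax-b\|_2^2 + \lambda\sum_g \|x_g\|_2$, not the paper's grouped Fourier transform; the design matrix, groups, and parameter values below are made up):

```python
import numpy as np

def group_soft_threshold(x, groups, tau):
    """Block soft-thresholding: shrink the l2 norm of each group by tau."""
    out = x.copy()
    for g in groups:                              # g: index array of one group
        norm = np.linalg.norm(x[g])
        out[g] = 0.0 if norm <= tau else (1.0 - tau / norm) * x[g]
    return out

def fista_group_lasso(A, b, groups, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam * sum_g ||x_g||_2 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = group_soft_threshold(y - grad / L, groups, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy usage: three groups of four coefficients, only the first group is active.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 12))
x_true = np.zeros(12); x_true[:4] = [1.0, -2.0, 0.5, 3.0]
b = A @ x_true + 0.05 * rng.standard_normal(50)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
print(np.round(fista_group_lasso(A, b, groups, lam=1.0), 2))
```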

This paper is concerned with superconvergence properties of the direct discontinuous Galerkin (DDG) method for two-dimensional nonlinear convection-diffusion equations. Using the idea of a correction function, we prove that, for piecewise tensor-product polynomials of degree $k\geq 2$, the DDG solution is superconvergent at the nodes and at the Lobatto points, with orders ${\cal O}(h^{2k})$ and ${\cal O}(h^{k+2})$, respectively. Moreover, superconvergence properties of the derivative approximation are also studied, and the superconvergence points are identified as the Gauss points, with an order of ${\cal O}(h^{k+1})$. Numerical experiments are presented to confirm the sharpness of all the theoretical findings.
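
For context, the sharpness of such rates is typically checked empirically by computing errors on successively refined meshes and estimating the observed order; a generic sketch (the mesh sizes and error values below are hypothetical placeholders, not results from the paper):

```python
import numpy as np

# Hypothetical errors at Lobatto points for k = 2; a true rate of h^{k+2} = h^4
# means the observed orders should approach 4 under mesh refinement.
h = np.array([1/4, 1/8, 1/16, 1/32])
err = np.array([2.1e-3, 1.4e-4, 8.9e-6, 5.6e-7])

# Observed order between consecutive meshes: log(e_i / e_{i+1}) / log(h_i / h_{i+1}).
orders = np.log(err[:-1] / err[1:]) / np.log(h[:-1] / h[1:])
print(orders)   # each entry should be close to the predicted order
```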

We propose a doubly robust approach to characterizing treatment effect heterogeneity in observational studies. We utilize posterior distributions for both the propensity score and outcome regression models to provide valid inference on the conditional average treatment effect even when high-dimensional or nonparametric models are used. We show that our approach leads to conservative inference in finite samples or under model misspecification, and provides a consistent variance estimator when both models are correctly specified. In simulations, we illustrate the utility of these results in difficult settings such as high-dimensional covariate spaces or highly flexible models for the propensity score and outcome regression. Lastly, we analyze environmental exposure data from NHANES to identify how the effects of these exposures vary by subject-level characteristics.
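
A frequentist caricature of the same idea (not the paper's posterior-based procedure; the models, data-generating process, and names below are illustrative) forms doubly robust pseudo-outcomes from a propensity model and outcome regressions, then projects them onto covariates to summarize effect heterogeneity:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n, p = 2000, 5
X = rng.standard_normal((n, p))
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))          # treatment assignment
Y = X[:, 1] + A * (1.0 + X[:, 2]) + rng.standard_normal(n)   # heterogeneous effect

# Nuisance models: propensity score and outcome regressions under A = 1 and A = 0.
e_hat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
m1 = LinearRegression().fit(X[A == 1], Y[A == 1]).predict(X)
m0 = LinearRegression().fit(X[A == 0], Y[A == 0]).predict(X)

# Doubly robust (AIPW) pseudo-outcome; its conditional mean is the CATE.
psi = m1 - m0 + A * (Y - m1) / e_hat - (1 - A) * (Y - m0) / (1 - e_hat)

# Summarize heterogeneity by regressing the pseudo-outcome on covariates.
cate_model = LinearRegression().fit(X, psi)
print(np.round(cate_model.coef_, 2))   # the coefficient on X[:, 2] should be near 1
```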

Measurement error is a pervasive issue that can render the results of an analysis unreliable. The measurement error literature contains numerous correction techniques, which can be broadly divided into those that aim to produce exactly consistent estimators and those that are only approximately consistent. While consistency is a desirable property, it is typically attained only under specific model assumptions. Two techniques, regression calibration and simulation extrapolation, are used frequently in a wide variety of parametric and semiparametric settings; in many of these settings, however, the methods are only approximately consistent. We generalize these corrections, relaxing the assumptions placed on replicate measurements. Under regularity conditions, the estimators are shown to be asymptotically normal, with a sandwich estimator for the asymptotic variance. Through simulation, we demonstrate the improved performance of the modified estimators over the standard techniques when these assumptions are violated. We motivate these corrections using the Framingham Heart Study and apply the generalized techniques to an analysis of these data.
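
For orientation, a bare-bones version of simulation extrapolation (SIMEX) for simple linear regression with additive measurement error of known variance looks as follows (a textbook-style sketch under those assumptions, not the authors' generalized correction; the variable names and the quadratic extrapolant are our choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta, sigma_u = 5000, 2.0, 0.8
x = rng.standard_normal(n)                      # true covariate (unobserved)
w = x + sigma_u * rng.standard_normal(n)        # error-prone measurement
y = beta * x + rng.standard_normal(n)

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# SIMEX: add extra noise at levels lambda, record the (attenuated) slope,
# then extrapolate the slope back to lambda = -1, i.e., no measurement error.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
slopes = [np.mean([slope(w + np.sqrt(lam) * sigma_u * rng.standard_normal(n), y)
                   for _ in range(50)])          # average over simulated remeasurements
          for lam in lambdas]

coefs = np.polyfit(lambdas, slopes, deg=2)       # quadratic extrapolant in lambda
beta_simex = np.polyval(coefs, -1.0)
print(slope(w, y), beta_simex)                   # naive (attenuated) vs. SIMEX estimate
```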

We develop machinery to design efficiently computable and consistent estimators, achieving estimation error approaching zero as the number of observations grows, when facing an oblivious adversary that may corrupt responses in all but an $\alpha$ fraction of the samples. As concrete examples, we investigate two problems: sparse regression and principal component analysis (PCA). For sparse regression, we achieve consistency for optimal sample size $n\gtrsim (k\log d)/\alpha^2$ and optimal error rate $O(\sqrt{(k\log d)/(n\cdot \alpha^2)})$, where $n$ is the number of observations, $d$ is the number of dimensions and $k$ is the sparsity of the parameter vector, allowing the fraction of inliers to be inverse-polynomial in the number of samples. Prior to this work, no estimator was known to be consistent when the fraction of inliers $\alpha$ is $o(1/\log \log n)$, even for (non-spherical) Gaussian design matrices. Results holding under weak design assumptions and in the presence of such general noise have only been shown in the dense setting (i.e., general linear regression) very recently by d'Orsi et al. [dNS21]. In the context of PCA, we attain optimal error guarantees under broad spikiness assumptions on the parameter matrix (usually used in matrix completion). Previous works could obtain non-trivial guarantees only under the assumption that the measurement noise corresponding to the inliers is polynomially small in $n$ (e.g., Gaussian with variance $1/n^2$). To devise our estimators, we equip the Huber loss with non-smooth regularizers such as the $\ell_1$ norm or the nuclear norm, and extend d'Orsi et al.'s approach [dNS21] in a novel way to analyze the loss function. Our machinery appears to be easily applicable to a wide range of estimation problems.
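
To make the construction concrete, here is a minimal proximal-gradient sketch of an $\ell_1$-regularized Huber-loss regression (a generic illustration of "Huber loss plus a non-smooth regularizer", not the estimator or analysis of this paper; the step size, penalty level, and toy data are ad hoc):

```python
import numpy as np

def huber_grad(r, delta=1.0):
    """Derivative of the Huber loss, applied elementwise to residuals r."""
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def huber_lasso(X, y, lam, delta=1.0, n_iter=1000):
    """Minimize sum_i huber(y_i - x_i' beta) + lam * ||beta||_1 by proximal gradient."""
    step = 1.0 / (np.linalg.norm(X, 2) ** 2)     # the Huber gradient is 1-Lipschitz in r
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        r = y - X @ beta
        beta = soft_threshold(beta + step * (X.T @ huber_grad(r, delta)), step * lam)
    return beta

# Toy usage with a corrupted fraction of responses (an oblivious adversary).
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50); beta_true[:5] = 3.0
y = X @ beta_true + 0.1 * rng.standard_normal(200)
y[:60] = 50.0 * rng.standard_normal(60)          # corrupt 30% of the responses
print(np.round(huber_lasso(X, y, lam=5.0)[:8], 2))
```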

We study the least squares estimator, in the framework of simple linear regression, when the error term $\varepsilon$ of the linear model follows a uniform distribution. In particular, we derive the law of this estimator and prove some convergence properties.
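
For reference (our notation, assuming the model $y_i = \beta_0 + \beta_1 x_i + \varepsilon_i$ with $\varepsilon_i$ i.i.d. uniform on $[-a, a]$), the slope estimator is a fixed linear combination of the uniform errors, which is what makes its exact law tractable:

$$\hat\beta_1 = \frac{\sum_{i=1}^n (x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^n (x_i - \bar x)^2}, \qquad \hat\beta_1 - \beta_1 = \sum_{i=1}^n w_i \varepsilon_i, \quad w_i = \frac{x_i - \bar x}{\sum_{j=1}^n (x_j - \bar x)^2},$$

so the law of $\hat\beta_1$ is that of a weighted sum of independent uniform random variables, and its convergence follows from the behavior of the weights $w_i$.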

With the increasing complexity of production processes, the growing diversity of data types makes network monitoring technology ever more urgent to develop. This paper focuses on an online algorithm for robustly and sensitively detecting changes in serially correlated directed networks. First, a transition probability matrix is used to account for the twofold correlation in the raw data. Moreover, since each row of the transition probability matrix sums to one, this representation also standardizes the data, which facilitates subsequent modeling. We then extend the spring-length-based method to the multivariate case and propose an adaptive cumulative sum (CUSUM) control chart, built on a weighted statistic, to monitor directed networks. The proposed approach assumes only that each process observation is associated with nearby observations, without imposing any parametric time series model, which is consistent with practice. Simulation results and a real example from metro transportation demonstrate the superiority of our design.
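
As a stripped-down illustration of the monitoring pipeline (a plain multivariate CUSUM on row-normalized transition probability matrices; the weighting, adaptivity, and spring-length extension of the paper are omitted, and all names, thresholds, and data below are placeholders):

```python
import numpy as np

def transition_matrix(counts):
    """Row-normalize an observed flow/count matrix into transition probabilities."""
    row_sums = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, row_sums, out=np.zeros(counts.shape), where=row_sums > 0)

def cusum_monitor(stream, P0, k=0.3, h=2.0):
    """One-sided CUSUM on the deviation of each observed transition matrix
    from the in-control matrix P0; flag an alarm once the statistic exceeds h."""
    C, alarms = 0.0, []
    for counts in stream:
        dev = np.linalg.norm(transition_matrix(counts) - P0)   # Frobenius deviation
        C = max(0.0, C + dev - k)
        alarms.append(C > h)
    return alarms

# Toy usage: a 4-node directed network whose routing pattern shifts after t = 30.
rng = np.random.default_rng(4)
P_in = transition_matrix(rng.uniform(1, 5, size=(4, 4)))
P_out = transition_matrix(rng.uniform(1, 5, size=(4, 4)))
stream = [np.vstack([rng.multinomial(200, (P_in if t < 30 else P_out)[i])
                     for i in range(4)]) for t in range(60)]
alarms = cusum_monitor(stream, P0=P_in)
print(int(np.argmax(alarms)) if any(alarms) else "no alarm")   # first alarm time
```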

We consider the problem of discovering $K$ related Gaussian directed acyclic graphs (DAGs), where the involved graph structures share a consistent causal order and sparse unions of supports. Under the multi-task learning setting, we propose an $l_1/l_2$-regularized maximum likelihood estimator (MLE) for learning $K$ linear structural equation models. We theoretically show that the joint estimator, by leveraging data across related tasks, achieves a better sample complexity for recovering the causal order (or topological order) than separate estimation. Moreover, the joint estimator is able to recover non-identifiable DAGs by estimating them together with some identifiable DAGs. Lastly, our analysis also establishes consistency of union support recovery for the structures. To allow practical implementation, we design a continuous optimization problem whose optimizer coincides with the joint estimator and can be approximated efficiently by an iterative algorithm. We validate the theoretical analysis and the effectiveness of the joint estimator in experiments.
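
For orientation only (our notation, not necessarily the exact estimator analyzed here), assuming linear Gaussian structural equation models $X^{(k)} = X^{(k)} B^{(k)} + \varepsilon^{(k)}$ with $n_k$ observations in task $k$, a joint objective of this flavor couples the same edge across the $K$ tasks through an $l_1/l_2$ group penalty:

$$\min_{B^{(1)},\dots,B^{(K)}} \; \sum_{k=1}^{K} \frac{1}{2 n_k}\bigl\|X^{(k)} - X^{(k)} B^{(k)}\bigr\|_F^2 \;+\; \lambda \sum_{i \neq j} \Bigl(\sum_{k=1}^{K} \bigl(B^{(k)}_{ij}\bigr)^2\Bigr)^{1/2},$$

subject to each $B^{(k)}$ being compatible with the shared causal order (i.e., permutable to strict upper-triangular form); the group penalty encourages the union of supports to be sparse and shared across tasks.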

This paper focuses on the expected difference in a borrower's repayment when there is a change in the lender's credit decisions. Classical estimators overlook confounding effects, and hence their estimation error can be substantial. We therefore propose an alternative approach to constructing estimators that greatly reduces this error. The proposed estimators are shown to be unbiased, consistent, and robust through a combination of theoretical analysis and numerical testing. Moreover, we compare the ability of the classical and the proposed estimators to estimate the causal quantities. The comparison is carried out across a wide range of models, including linear regression models, tree-based models, and neural-network-based models, under simulated datasets exhibiting different levels of causality, degrees of nonlinearity, and distributional properties. Most importantly, we apply our approach to a large observational dataset provided by a global technology firm that operates in both the e-commerce and the lending business. We find that the relative reduction in estimation error is strikingly substantial when the causal effects are accounted for correctly.

Implicit probabilistic models are defined naturally in terms of a sampling procedure and often induce a likelihood function that cannot be expressed explicitly. We develop a simple method for estimating parameters in implicit models that does not require knowledge of the form of the likelihood function or any derived quantities, but can be shown to be equivalent to maximizing the likelihood under some conditions. Our result holds in the non-asymptotic parametric setting, where both the capacity of the model and the number of data examples are finite. We also demonstrate encouraging experimental results.
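
One generic way to fit a model that is only available through its sampler (not necessarily the method proposed here; the simulator, summary statistics, and optimizer below are illustrative choices) is to match simulated and observed summary statistics:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def simulate(theta, n, rng):
    """Implicit model: easy to sample from, but its density is not written down.
    Toy simulator: a nonlinear transform of Gaussian noise with scale theta > 0."""
    return np.exp(theta * rng.standard_normal(n))

def summaries(x):
    return np.array([np.mean(np.log(x)), np.var(np.log(x))])

# "Observed" data generated at theta = 1.5, unknown to the estimator.
rng = np.random.default_rng(5)
s_obs = summaries(simulate(1.5, 5000, rng))

def objective(theta):
    # Common random numbers (fixed seed) keep the objective smooth in theta.
    s_sim = summaries(simulate(theta, 5000, np.random.default_rng(0)))
    return float(np.sum((s_sim - s_obs) ** 2))

res = minimize_scalar(objective, bounds=(0.1, 5.0), method="bounded")
print(res.x)   # should be close to 1.5
```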
