
The paper introduces a novel non-parametric Riemannian regression method using Isometric Riemannian Manifolds (IRMs). The proposed technique, Intrinsic Local Polynomial Regression on IRMs (ILPR-IRMs), enables global data mapping between Riemannian manifolds while preserving the underlying geometry. The ILPR method is generalized to handle multivariate covariates on any Riemannian manifold and isometry. Specifically, for manifolds equipped with Euclidean Pullback Metrics (EPMs), a closed analytical formula is derived for the multivariate ILPR (ILPR-EPM). Asymptotic statistical properties of the ILPR-EPM are established for the multivariate local linear case, including a formula for the asymptotic bias that implies consistency of the estimator. The paper showcases possible applications of the method by focusing on a group of Riemannian metrics on the Symmetric Positive Definite (SPD) manifold, which arises in machine learning and neuroscience. It is demonstrated that several metrics on the SPD manifold are EPMs, resulting in a closed analytical expression for the multivariate ILPR estimator on the SPD manifold. The paper evaluates the ILPR estimator's performance under two specific EPMs, Log-Cholesky and Log-Euclidean, on simulated data on the SPD manifold, and compares it with extrinsic LPR under the Affine-Invariant metric while scaling the manifold and covariate dimensions. The results show that ILPR with the Log-Cholesky metric is computationally faster and provides a better trade-off between error and time than the other metrics. Finally, the Log-Cholesky metric on the SPD manifold is employed to implement an efficient and intrinsic version of Rie-SNE for visualizing high-dimensional SPD data. The code implementing ILPR-EPMs and other relevant calculations is available on GitHub.
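For concreteness, here is a minimal sketch of the idea in the Log-Euclidean case: because the Log-Euclidean metric is the pullback of the Euclidean metric through the matrix logarithm, local linear regression can be carried out in the log domain and the fitted value mapped back with the matrix exponential. The Gaussian kernel, bandwidth, and design matrix below are illustrative choices, not the paper's exact ILPR-EPM formula.

```python
# Minimal sketch of local linear regression under the Log-Euclidean
# pullback metric: matrix-log the SPD responses, do a kernel-weighted
# local linear fit in the (Euclidean) log domain, map back with expm.
import numpy as np
from scipy.linalg import logm, expm

def ilpr_log_euclidean(X, Y, x0, h=0.5):
    """X: (n, d) covariates; Y: length-n list of (p, p) SPD matrices; x0: (d,) query."""
    n = X.shape[0]
    L = np.array([np.real(logm(S)).ravel() for S in Y])      # responses in log domain
    U = X - x0                                               # centered covariates
    w = np.sqrt(np.exp(-np.sum(U**2, axis=1) / (2 * h**2)))  # kernel weights (sqrt for WLS)
    Z = np.hstack([np.ones((n, 1)), U])                      # local linear design
    beta, *_ = np.linalg.lstsq(Z * w[:, None], L * w[:, None], rcond=None)
    p = Y[0].shape[0]
    G = beta[0].reshape(p, p)                                # fitted log-response at x0
    return expm((G + G.T) / 2)                               # symmetrize, map back to SPD
```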

Related Content

Automated variable selection is widely applied in statistical model development. Algorithms such as forward, backward, or stepwise selection are available in statistical software packages like R and SAS. Many researchers have criticized the use of these algorithms because the resulting models are not based on theory and tend to be unstable. Furthermore, simulation studies have shown that they often select incorrect variables due to random effects, which makes these model-building strategies unreliable. In this article, a comprehensive stepwise selection algorithm tailored to logistic regression is proposed. It uses multiple criteria in variable selection instead of relying on a single measure, such as a $p$-value or Akaike's information criterion, which makes the final outcome more robust and sound. The result of the selection process need not be unambiguous: the algorithm may return multiple models that can be considered statistically equivalent. A simulation study demonstrates the superiority of the proposed variable selection method over available alternatives.
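As a rough illustration of the multi-criterion idea (the abstract does not spell out the exact criteria, so the AIC-plus-p-value combination and the thresholds below are assumptions), a forward step could be accepted only when the best candidate improves every criterion simultaneously:

```python
# Illustrative multi-criterion forward selection for logistic regression:
# a candidate enters only if it both lowers the AIC and has a Wald
# p-value below `alpha`. Criteria and thresholds are assumptions made
# for illustration; the paper combines its own set of measures.
import statsmodels.api as sm

def forward_select(X, y, alpha=0.05):
    """X: pandas DataFrame of candidate predictors; y: binary response."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.Logit(y, sm.add_constant(X[[]])).fit(disp=0).aic   # intercept-only
    while remaining:
        scores = []
        for var in remaining:
            fit = sm.Logit(y, sm.add_constant(X[selected + [var]])).fit(disp=0)
            scores.append((fit.aic, fit.pvalues[var], var))
        aic, pval, var = min(scores)                 # best candidate by AIC
        if aic >= best_aic or pval >= alpha:         # must satisfy BOTH criteria
            break
        selected.append(var)
        remaining.remove(var)
        best_aic = aic
    return selected
```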

The problem of generalization and transportation of treatment effect estimates from a study sample to a target population is central to empirical research and statistical methodology. In both randomized experiments and observational studies, weighting methods are often used with this objective. Traditional methods construct the weights by separately modeling the treatment assignment and study selection probabilities and then multiplying functions (e.g., inverses) of their estimates. In this work, we provide a justification and an implementation for weighting in a single step. We show a formal connection between this one-step method and inverse probability and inverse odds weighting. We demonstrate that the resulting estimator for the target average treatment effect is consistent, asymptotically Normal, multiply robust, and semiparametrically efficient. We evaluate the performance of the one-step estimator in a simulation study. We illustrate its use in a case study on the effects of physician racial diversity on preventive healthcare utilization among Black men in California. We provide R code implementing the methodology.
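For reference, the traditional two-model weight construction that the one-step method is contrasted with looks roughly as follows; the logistic models and variable names are illustrative, and the paper's one-step estimating procedure itself is not reproduced here.

```python
# Sketch of the traditional construction: fit the study-selection and
# treatment-assignment probabilities separately, then multiply the
# inverse selection odds by the inverse propensity to target the
# population average treatment effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

def two_step_weights(X, A, S):
    """X: (n, d) covariates; S: 1 if in study sample; A: treatment for study units."""
    sel = LogisticRegression().fit(X, S)             # P(S=1 | X)
    ps = sel.predict_proba(X[S == 1])[:, 1]
    trt = LogisticRegression().fit(X[S == 1], A)     # P(A=1 | X, S=1)
    pa = trt.predict_proba(X[S == 1])[:, 1]
    pa = np.where(A == 1, pa, 1 - pa)                # prob. of observed treatment
    return ((1 - ps) / ps) / pa                      # inverse odds x inverse propensity
```

A weighted difference in outcome means between the two treatment arms then estimates the target average treatment effect; the one-step approach replaces the two separate fits with a single weighting step.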

Given tensors $\boldsymbol{\mathscr{A}}, \boldsymbol{\mathscr{B}}, \boldsymbol{\mathscr{C}}$ of size $m \times 1 \times n$, $m \times p \times 1$, and $1\times p \times n$, respectively, their Bhattacharya-Mesner (BM) product is a third-order tensor of dimension $m \times p \times n$ and BM-rank 1 (Mesner and Bhattacharya, 1990). Thus, if a third-order tensor can be written as a sum of a small number of such BM-rank-1 terms, this BM-decomposition (BMD) offers an implicitly compressed representation of the tensor. Motivated by this, we give a generative model which illustrates that spatio-temporal video data can be expected to have low BM-rank. We then discuss non-uniqueness properties of the BMD and give an improved bound on the BM-rank of a third-order tensor. We present and study properties of an iterative algorithm for computing an approximate BMD, including convergence behavior and appropriate choices of starting guesses that allow for the decomposition of our spatio-temporal data into stationary and non-stationary components. Several numerical experiments show the ability of our BMD algorithm to extract important temporal information from video data while simultaneously compressing the data. In particular, we compare our approach with the dynamic mode decomposition (DMD): first, we show how the matrix-based DMD can be reinterpreted in tensor BM-product form, then we explain why the low-BM-rank decomposition can produce results with superior compression properties while simultaneously providing better separation of stationary and non-stationary features in the data. We conclude with a comparison of our low-BM-rank decomposition to two other tensor decompositions, CP and the t-SVDM.
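The BM product underlying this construction can be stated compactly; the entrywise rule below is the standard Bhattacharya-Mesner definition with inner dimension $\ell$, and $\ell = 1$ recovers the BM-rank-1 case described above.

```python
# BM product: T[i, j, k] = sum_t A[i, t, k] * B[i, j, t] * C[t, j, k].
# With factor sizes m x 1 x n, m x p x 1, 1 x p x n (ell = 1), the
# result is an m x p x n tensor of BM-rank 1; summing r such terms
# gives a BM-rank-<=r approximation.
import numpy as np

def bm_product(A, B, C):
    """A: (m, ell, n), B: (m, p, ell), C: (ell, p, n) -> (m, p, n)."""
    return np.einsum('itk,ijt,tjk->ijk', A, B, C)

m, p, n = 4, 3, 5
A, B, C = np.random.rand(m, 1, n), np.random.rand(m, p, 1), np.random.rand(1, p, n)
T = bm_product(A, B, C)          # BM-rank-1 tensor of shape (4, 3, 5)
```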

This paper presents a novel approach to Bayesian nonparametric spectral analysis of stationary multivariate time series. Starting with a parametric vector-autoregressive model, the parametric likelihood is nonparametrically adjusted in the frequency domain to account for potential deviations from the parametric assumptions. We show mutual contiguity of the nonparametrically corrected likelihood, the multivariate Whittle likelihood approximation, and the exact likelihood for Gaussian time series. A multivariate extension of the nonparametric Bernstein-Dirichlet process prior for univariate spectral densities to the space of Hermitian positive definite spectral density matrices is specified directly on the correction matrices. An infinite series representation of this prior is then used to develop a Markov chain Monte Carlo algorithm to sample from the posterior distribution. The code is made publicly available for ease of use and reproducibility. With this novel approach we generalize the multivariate Whittle-likelihood-based method of Meier et al. (2020) and extend the nonparametrically corrected likelihood for univariate stationary time series of Kirch et al. (2019) to the multivariate case. We demonstrate that the nonparametrically corrected likelihood combines the efficiency of a parametric model with the robustness of a nonparametric one. Its numerical accuracy is illustrated in a comprehensive simulation study. We illustrate its practical advantages by a spectral analysis of two environmental data sets: a bivariate time series of the Southern Oscillation Index and fish recruitment, and wind speed data at six locations in California.
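As background, the multivariate Whittle approximation that the corrected likelihood adjusts can be sketched as follows; the normalization convention is one common choice, and the paper's correction matrices are not reproduced here.

```python
# Multivariate Whittle log-likelihood evaluated from raw periodogram
# matrices and a candidate spectral density f.
import numpy as np

def whittle_loglik(X, f):
    """X: (T, d) series; f: callable, f(j) -> (d, d) spectral density at 2*pi*j/T."""
    T = X.shape[0]
    D = np.fft.fft(X, axis=0)                            # discrete Fourier transform
    ll = 0.0
    for j in range(1, (T - 1) // 2 + 1):                 # positive Fourier frequencies
        I = np.outer(D[j], D[j].conj()) / (2 * np.pi * T)  # periodogram matrix
        fj = f(j)
        ll -= np.log(np.linalg.det(fj)).real             # log det f(lambda_j)
        ll -= np.trace(np.linalg.solve(fj, I)).real      # tr( f^{-1} I )
    return ll
```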

Multivariate sequential data collected in practice often exhibit temporal irregularities, including nonuniform time intervals and component misalignment. However, if uneven spacing and asynchrony are endogenous characteristics of the data rather than a result of insufficient observation, the information content of these irregularities plays a defining role in characterizing the multivariate dependence structure. Existing approaches for probabilistic forecasting either overlook the resulting statistical heterogeneities, are susceptible to imputation biases, or impose parametric assumptions on the data distribution. This paper proposes an end-to-end solution that overcomes these limitations by allowing the observation arrival times, which lie at the core of temporal irregularities, to play the central role in model construction. To acknowledge temporal irregularities, we first give each component its own hidden state so that the arrival times can dictate when, how, and which hidden states to update. We then develop a conditional flow representation to non-parametrically model the data distribution, which is typically non-Gaussian, and supervise this representation by carefully factorizing the log-likelihood objective to select conditional information that facilitates capturing time variation and path dependency. The broad applicability and superiority of the proposed solution are confirmed by comparing it with existing approaches through ablation studies and testing on real-world datasets.
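A heavily simplified sketch of the "arrival times dictate the updates" idea is given below; the GRU cell and input features are illustrative assumptions, not the paper's architecture.

```python
# Each component keeps its own hidden state; an incoming observation
# updates only that component's state, with the elapsed time since its
# last observation fed in as an input feature.
import torch

class AsyncStateTracker(torch.nn.Module):
    def __init__(self, n_components, hidden=16):
        super().__init__()
        self.cell = torch.nn.GRUCell(2, hidden)          # input: (value, time gap)
        self.h = torch.zeros(n_components, hidden)       # component-wise hidden states
        self.last_t = torch.zeros(n_components)

    def observe(self, t, c, value):
        """An observation of component c at arrival time t updates only state c."""
        gap = float(t - self.last_t[c])
        x = torch.tensor([[float(value), gap]])
        self.h[c] = self.cell(x, self.h[c].unsqueeze(0))[0]
        self.last_t[c] = t
```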

Since optimization on Riemannian manifolds depends on the chosen metric, it is appealing to know how the performance of a Riemannian optimization method varies across metrics and how to construct a metric so that the method is accelerated. To this end, we propose a general framework for optimization problems on product manifolds where the search space is endowed with a preconditioned metric, and we develop the Riemannian gradient descent and Riemannian conjugate gradient methods under this metric. Specifically, the metric is constructed from an operator that approximates the diagonal blocks of the Riemannian Hessian of the cost function and therefore has a preconditioning effect. We explain the relationship between the proposed methods and variable metric methods, and show that various existing methods, e.g., the Riemannian Gauss--Newton method, can be interpreted within the proposed framework under specific metrics. In addition, we tailor new preconditioned metrics and adapt the proposed Riemannian methods to the canonical correlation analysis and truncated singular value decomposition problems, and we propose a Gauss--Newton method for the tensor ring completion problem. Numerical results on these applications confirm that a carefully constructed metric does accelerate the Riemannian optimization methods.
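In the simplest instance of this construction, on a product of Euclidean factors, endowing factor $i$ with the metric $\langle u, v \rangle = u^\top P_i v$, where $P_i$ approximates that factor's diagonal Hessian block, turns Riemannian gradient descent into an ordinary block-preconditioned update. A minimal sketch, with retractions and the paper's specific operators omitted:

```python
# Under the metric <u, v> = u' P_i v, the Riemannian gradient of f on
# factor i is P_i^{-1} grad_i f, so one descent step per factor reads
# x_i <- x_i - step * P_i^{-1} grad_i.
import numpy as np

def preconditioned_gd_step(xs, grads, hess_blocks, step=1.0):
    """xs, grads: lists of per-factor arrays; hess_blocks: list of P_i matrices."""
    return [x - step * np.linalg.solve(P, g)
            for x, g, P in zip(xs, grads, hess_blocks)]
```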

There are well-established methods for identifying the causal effect of a time-varying treatment applied at discrete time points. However, in the real world, many treatments are continuous or operate on a finer time scale than the one used for measurement or analysis. While researchers have investigated the discrepancies between estimates under varying discretization scales using simulations and empirical data, it is still unclear how the choice of discretization scale affects causal inference. To address this gap, we present a framework for understanding how discretization scales impact the properties of causal inferences about the effect of a time-varying treatment. We introduce the concept of "identification bias", the difference between the causal estimand for a continuous-time treatment and the purported estimand of a discretized version of the treatment. We show that this bias can persist even with an infinite number of longitudinal treatment-outcome trajectories. We specifically examine the identification problem in a class of linear stochastic continuous-time data-generating processes and demonstrate the identification bias of the g-formula in this context. Our findings indicate that discretization bias can significantly impact empirical analysis, especially when repeated measurements are limited. We therefore recommend that researchers carefully consider the choice of discretization scale and perform sensitivity analyses to address this bias. We also propose a simple heuristic measure of sensitivity to discretization and suggest that researchers report it alongside point and interval estimates. By doing so, researchers can better understand and address the potential impact of discretization bias on causal inference.
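A toy illustration of the phenomenon, under purely illustrative assumptions: suppose the outcome responds to the integral of a continuous-time treatment, while the analyst sees only $K$ snapshots and uses their average as a stand-in. The gap below shrinks as the measurement grid is refined but does not vanish with more trajectories.

```python
# Identification gap between a continuous-time dose (integral of the
# treatment path) and its K-snapshot discretization; averaging over
# many trajectories does not remove it.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1001)
for K in (2, 5, 20, 100):
    gaps = []
    for _ in range(200):                                # more trajectories don't help
        a = np.clip(np.cumsum(rng.normal(0, 0.05, t.size)), 0, 1)  # treatment path
        true_dose = np.trapz(a, t)                      # what the outcome responds to
        snap = a[:: (t.size - 1) // K][:K]              # K discrete measurements
        gaps.append(abs(true_dose - snap.mean()))       # discretized stand-in
    print(K, "measurements -> mean identification gap:", round(np.mean(gaps), 4))
```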

Collections of probability distributions arise in a variety of applications ranging from user activity pattern analysis to brain connectomics. In practice these distributions can be defined over diverse domain types including finite intervals, circles, cylinders, spheres, other manifolds, and graphs. This paper introduces an approach for detecting differences between two collections of distributions over such general domains. To this end, we propose the intrinsic slicing construction, which yields a novel class of Wasserstein distances on manifolds and graphs. These distances are Hilbert embeddable, allowing us to reduce the distribution collection comparison problem to a more familiar mean testing problem in a Hilbert space. We provide two testing procedures: one based on resampling and another based on combining p-values from coordinate-wise tests. Our experiments in various synthetic and real data settings show that the resulting tests are powerful and the p-values are well-calibrated.
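The reduction at the heart of this approach can be illustrated in the classical one-dimensional case, where the 2-Wasserstein distance equals the $L^2$ distance between quantile functions: each (sliced) distribution embeds into a Hilbert space, and comparing two collections becomes a mean test, here via permutation. The manifold/graph slicing itself is the paper's construction and is not reproduced here.

```python
# Quantile-function embedding of 1-D samples (Hilbert embedding of W2)
# followed by a permutation test on the difference of collection means.
import numpy as np

def embed(sample, q=np.linspace(0.01, 0.99, 99)):
    return np.quantile(sample, q)                        # quantile-function embedding

def perm_test(coll_a, coll_b, n_perm=999, seed=0):
    """coll_a, coll_b: lists of 1-D samples; returns a permutation p-value."""
    rng = np.random.default_rng(seed)
    E = np.array([embed(s) for s in coll_a + coll_b])
    na = len(coll_a)
    stat = np.linalg.norm(E[:na].mean(0) - E[na:].mean(0))
    null = []
    for _ in range(n_perm):
        idx = rng.permutation(len(E))
        null.append(np.linalg.norm(E[idx[:na]].mean(0) - E[idx[na:]].mean(0)))
    return (1 + np.sum(np.array(null) >= stat)) / (n_perm + 1)
```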

This PhD thesis contains several contributions to the field of statistical causal modeling. Statistical causal models are statistical models embedded with causal assumptions that allow for inference and reasoning about the behavior of stochastic systems affected by external manipulation (interventions). The thesis contributes to the research areas of causal effect estimation, causal structure learning, and distributionally robust (out-of-distribution generalizing) prediction. We present novel and consistent linear and non-linear causal effect estimators for instrumental variable settings that employ data-dependent mean squared prediction error regularization. In certain settings, the proposed estimators achieve mean squared error improvements over both canonical and state-of-the-art estimators. We show that recent research on distributionally robust prediction methods has connections to well-studied estimators from econometrics; this connection leads us to prove that general K-class estimators possess distributional robustness properties. Furthermore, we propose a general framework for distributional robustness with respect to intervention-induced distributions. In this framework, we derive sufficient conditions for the identifiability of distributionally robust prediction methods and present impossibility results that show the necessity of several of these conditions. We present a new structure learning method applicable in additive noise models with directed trees as causal graphs. We prove consistency in a vanishing identifiability setup and provide a method for testing substructure hypotheses with asymptotic family-wise error control that remains valid post-selection. Finally, we present heuristic ideas for learning summary graphs of nonlinear time-series models.

The inductive biases of graph representation learning algorithms are often encoded in the background geometry of their embedding space. In this paper, we show that general directed graphs can be effectively represented by an embedding model that combines three components: a pseudo-Riemannian metric structure, a non-trivial global topology, and a unique likelihood function that explicitly incorporates a preferred direction in embedding space. We demonstrate the representational capabilities of this method by applying it to link prediction on a series of synthetic and real directed graphs from natural language applications and biology. In particular, we show that low-dimensional cylindrical Minkowski and anti-de Sitter spacetimes can produce graph representations that are as good as or better than those of higher-dimensional curved Riemannian manifolds.
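As a hedged illustration of these ingredients (not the paper's implementation, and with an invented score function), a Minkowski squared interval combined with a time-asymmetric logistic score yields directed edge probabilities, so $u \to v$ and $v \to u$ are scored differently:

```python
# Lorentzian squared interval plus a logistic edge score that is
# asymmetric in the time coordinate (the preferred direction).
import numpy as np

def minkowski_sq_interval(u, v):
    """First coordinate is time-like: ds^2 = -(dt)^2 + |dx|^2."""
    d = v - u
    return -d[0]**2 + np.sum(d[1:]**2)

def edge_prob(u, v, r=1.0, beta=5.0):
    """High score for pairs that are geometrically close AND future-directed."""
    s = minkowski_sq_interval(u, v)
    dt = v[0] - u[0]                                  # preferred (time) direction
    sig = lambda z: 1 / (1 + np.exp(-z))
    return sig(beta * (r**2 - s)) * sig(beta * dt)
```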
