
Multivariate functional data can be intrinsically multivariate, like movement trajectories in 2D, or complementary, like precipitation, temperature, and wind speeds over time at a given weather station. We propose a multivariate functional additive mixed model (multiFAMM) and show its application to both data situations using examples from sports science (movement trajectories of snooker players) and phonetic science (acoustic signals and articulation of consonants). The approach includes linear and nonlinear covariate effects and models the dependency structure between the dimensions of the responses using multivariate functional principal component analysis. Multivariate functional random intercepts capture both the auto-correlation within a given function and cross-correlations between the multivariate functional dimensions. They also allow us to model between-function correlations as induced by e.g.\ repeated measurements or crossed study designs. Modeling the dependency structure between the dimensions generates additional insight into the properties of the multivariate functional process, improves the estimation of random effects, and yields corrected confidence bands for covariate effects. Extensive simulation studies indicate that a multivariate modeling approach is more parsimonious than fitting independent univariate models to the data while maintaining or improving model fit.
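As a toy illustration of the dependency modeling, here is a numpy sketch of the basic idea behind multivariate functional PCA on densely observed curves. It is a simplified, discretized version under our own naming, not the estimation procedure of the paper: the dimensions are concatenated and the joint covariance is eigendecomposed, so each eigenfunction couples all dimensions.

```python
import numpy as np

def multivariate_fpca(curves, n_components=2):
    """Toy multivariate FPCA on densely observed curves.

    curves: list of arrays, one per dimension, each of shape
            (n_subjects, n_gridpoints).  The dimensions are
    concatenated and the joint covariance is eigendecomposed,
    yielding eigenfunctions that couple all dimensions.
    """
    X = np.hstack(curves)                 # (n, total grid size)
    Xc = X - X.mean(axis=0)               # center at each grid point
    cov = Xc.T @ Xc / (X.shape[0] - 1)    # joint covariance matrix
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    eigvals = vals[order]
    eigfuns = vecs[:, order]              # multivariate eigenfunctions
    scores = Xc @ eigfuns                 # subject-specific scores
    return eigvals, eigfuns, scores
```

A shared score driving several dimensions shows up as a single leading eigenfunction with mass on all of them, which is the cross-correlation structure the multiFAMM exploits.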


We show that the nonstandard limiting distribution of HAR test statistics under fixed-b asymptotics is not pivotal (even after studentization) when the data are nonstationary. It takes the form of a complicated function of Gaussian processes and depends on the integrated local long-run variance and on the second moments of the relevant series (e.g., of the regressors and errors for the case of the linear regression model). Hence, existing fixed-b inference methods based on stationarity are not theoretically valid in general. The nuisance parameters entering the fixed-b limiting distribution can be consistently estimated under small-b asymptotics, but only at a nonparametric rate of convergence. Hence, we show that the error in rejection probability (ERP) is an order of magnitude larger than under stationarity and is also larger than that of HAR tests based on HAC estimators under conventional asymptotics. These theoretical results reconcile with recent finite-sample evidence in Casini (2021) and Casini, Deng and Perron (2021), who show that fixed-b HAR tests can perform poorly when the data are nonstationary: they can be conservative under the null hypothesis and have non-monotonic power under the alternative hypothesis irrespective of how large the sample size is.
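For orientation, a notation sketch of the quantities involved (our notation, not necessarily the authors'):

```latex
% Local autocovariance of the relevant series V_t at rescaled time u:
\Gamma_u(j) \;=\; \operatorname{Cov}\!\left(V_{\lfloor uT\rfloor},\, V_{\lfloor uT\rfloor - j}\right),
\qquad u \in [0,1].
% Local long-run variance and its integrated version:
\sigma^2(u) \;=\; \sum_{j=-\infty}^{\infty} \Gamma_u(j),
\qquad
\bar{\sigma}^2 \;=\; \int_0^1 \sigma^2(u)\,\mathrm{d}u .
% Fixed-b asymptotics hold b = M/T \in (0,1] constant as T \to \infty,
% where M is the HAC bandwidth; small-b asymptotics let b \to 0.
```

Under stationarity \(\sigma^2(u)\) is constant in \(u\) and drops out of the studentized limit; under nonstationarity the whole function \(\sigma^2(\cdot)\) enters, which is why the fixed-b limit stops being pivotal.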

We investigate a clustering problem with data from a mixture of Gaussians that share a common but unknown, and potentially ill-conditioned, covariance matrix. We start by considering Gaussian mixtures with two equally-sized components and derive a Max-Cut integer program based on maximum likelihood estimation. We prove its solutions achieve the optimal misclassification rate when the number of samples grows linearly in the dimension, up to a logarithmic factor. However, solving the Max-Cut problem appears to be computationally intractable. To overcome this, we develop an efficient spectral algorithm that attains the optimal rate but requires a quadratic sample size. Although this sample complexity is worse than that of the Max-Cut problem, we conjecture that no polynomial-time method can perform better. Furthermore, we gather numerical and theoretical evidence that supports the existence of a statistical-computational gap. Finally, we generalize the Max-Cut program to a $k$-means program that handles multi-component mixtures with possibly unequal weights. It enjoys similar optimality guarantees for mixtures of distributions that satisfy a transportation-cost inequality, encompassing Gaussian and strongly log-concave distributions.
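To make the spectral idea concrete, here is a deliberately naive numpy sketch of spectral two-component clustering. It is a generic illustration under our own naming, not the paper's algorithm: it clusters by the sign of the projection onto the leading eigenvector of the sample second-moment matrix, which implicitly assumes the between-cluster direction dominates; the paper's spectral algorithm additionally handles an unknown, possibly ill-conditioned common covariance, which this version does not.

```python
import numpy as np

def spectral_two_cluster(X):
    """Naive spectral clustering for a two-component mixture.

    Projects the centered data onto the leading eigenvector of the
    sample second-moment matrix and clusters by the sign of the
    projection.  Works when the separation between the two means
    dominates the within-cluster covariance.
    """
    Xc = X - X.mean(axis=0)
    M = Xc.T @ Xc / X.shape[0]      # sample second-moment matrix
    _, vecs = np.linalg.eigh(M)
    v = vecs[:, -1]                 # leading eigenvector
    return (Xc @ v >= 0).astype(int)
```

Cluster labels are only identified up to a global flip, so any accuracy check must consider both labelings.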

Longevity and safety of lithium-ion batteries are facilitated by efficient monitoring and adjustment of the battery operating conditions. Hence, it is crucial to implement fast and accurate algorithms for State of Health (SoH) monitoring on the Battery Management System. The task is challenging due to the complexity and multitude of the factors contributing to battery degradation, especially because the different degradation processes occur at various timescales and their interactions play an important role. Data-driven methods bypass this issue by approximating the complex processes with statistical or machine learning models. This paper proposes a data-driven approach which is understudied in the context of battery degradation, despite its simplicity and ease of computation: the Multivariable Fractional Polynomial (MFP) regression. Models are trained on historical data of one exhausted cell and used to predict the SoH of other cells. The data are characterised by varying loads simulating dynamic operating conditions. Two hypothetical scenarios are considered: one assumes that a recent capacity measurement is known, the other is based only on the nominal capacity. The results show that the degradation behaviour of the batteries under examination can be predicted from historical data, as supported by the low prediction errors achieved (root mean squared errors from 1.2% to 7.22% when considering data up to the battery End of Life). Moreover, we offer a multi-factor perspective in which the degree of impact of each factor is analysed. Finally, we compare with a Long Short-Term Memory Neural Network and other works from the literature on the same dataset. We conclude that the MFP regression is effective and competitive with contemporary works, and provides several additional advantages, e.g., in terms of interpretability, generalisability, and implementability.
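To illustrate the building block, here is a minimal numpy sketch of a degree-1 fractional polynomial fit over the conventional power set. The function names are ours and the example is univariate; the MFP procedure of the paper extends this with multiple covariates, degree-2 polynomials, and a structured model-selection algorithm.

```python
import numpy as np

FP_POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # conventional FP power set

def fp_transform(x, p):
    """Fractional-polynomial transform; p == 0 means log(x) by convention."""
    return np.log(x) if p == 0 else x ** p

def fit_fp1(x, y):
    """Fit a degree-1 fractional polynomial y ~ b0 + b1 * x^p.

    Tries each power in the conventional set and keeps the one with
    the smallest residual sum of squares.  x must be positive.
    """
    best = None
    for p in FP_POWERS:
        Z = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta = np.linalg.lstsq(Z, y, rcond=None)[0]
        resid = y - Z @ beta
        rss = float(resid @ resid)
        if best is None or rss < best[0]:
            best = (rss, p, beta)
    return {"power": best[1], "coef": best[2], "rss": best[0]}
```

Because each candidate model is a two-parameter least-squares fit, the whole search is cheap enough for an embedded Battery Management System, which is part of the appeal noted in the abstract.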

Mardia's measures of multivariate skewness and kurtosis summarize the respective characteristics of a multivariate distribution with two numbers. However, these measures do not reflect the sub-dimensional features of the distribution. Consequently, testing procedures based on these measures may fail to detect skewness or kurtosis present in a sub-dimension of the multivariate distribution. We introduce sub-dimensional Mardia measures of multivariate skewness and kurtosis, and investigate the information they convey about all sub-dimensional distributions of some symmetric and skewed families of multivariate distributions. The maxima of the sub-dimensional Mardia measures of multivariate skewness and kurtosis are considered, as these reflect the maximum skewness and kurtosis present in the distribution, and also allow us to identify the sub-dimension bearing the highest skewness and kurtosis. Asymptotic distributions of the vectors of sub-dimensional Mardia measures of multivariate skewness and kurtosis are derived, based on which testing procedures for the presence of skewness and of deviation from Gaussian kurtosis are developed. The performances of these tests are compared with some existing tests in the literature on simulated and real datasets.
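For concreteness, a numpy sketch of the classical full-dimensional Mardia measures that the proposed sub-dimensional versions refine (these are the standard formulas; the sub-dimensional measures of the paper apply the same idea to subsets of coordinates):

```python
import numpy as np

def mardia(X):
    """Mardia's multivariate skewness b1 and kurtosis b2.

    With m the sample mean and S the ML covariance estimate,
    d_ij = (x_i - m)' S^{-1} (x_j - m), and
        b1 = n^-2 * sum_{i,j} d_ij^3,
        b2 = n^-1 * sum_i  d_ii^2.
    For a p-variate normal sample, b1 is near 0 and b2 near p(p + 2).
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                  # ML covariance estimate
    D = Xc @ np.linalg.solve(S, Xc.T)  # matrix of d_ij values
    b1 = (D ** 3).sum() / n ** 2
    b2 = (np.diag(D) ** 2).mean()
    return b1, b2
```

Computing these two numbers for every sub-dimension (subset of coordinates) and taking maxima gives the kind of summaries whose asymptotic distributions the paper derives.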

Weak $\omega$-categories are notoriously difficult to define because of the very intricate nature of their axioms. Various approaches have been explored, based on different shapes given to the cells. Interestingly, homotopy type theory encompasses a definition of weak $\omega$-groupoids in a globular setting, since every type carries such a structure. Starting from this remark, Brunerie was able to extract a definition of globular weak $\omega$\nobreakdash-groupoids, formulated as a type theory. By refining its rules, Finster and Mimram then defined a type theory called CaTT, whose models are weak $\omega$-categories. Here, we generalize this approach to monoidal weak $\omega$-categories. Based on the principle that they should be equivalent to weak $\omega$-categories with only one $0$-cell, we derive a type theory MCaTT whose models are monoidal weak $\omega$-categories. This requires changing the rules of the theory in order to encode the information carried by the unique $0$-cell. The correctness of the resulting type theory is shown by defining a pair of translations between our type theory MCaTT and the type theory CaTT. Our main contribution is to show that these translations relate the models of our type theory to the models of the type theory CaTT consisting of $\omega$-categories with only one $0$-cell, by analyzing in detail how the notion of model interacts with the structural rules of both type theories.

Response functions linking regression predictors to properties of the response distribution are fundamental components in many statistical models. However, the choice of these functions is typically based on the domain of the modeled quantities and is not further scrutinized. For example, the exponential response function is usually assumed for parameters restricted to be positive although it implies a multiplicative model which may not necessarily be desired. Consequently, applied researchers might easily face misleading results when relying on defaults without further investigation. As an alternative to the exponential response function, we propose the use of the softplus function to construct alternative link functions for parameters restricted to be positive. As a major advantage, we can construct differentiable link functions corresponding closely to the identity function for positive values of the regression predictor, which implies a quasi-additive model and thus allows for an additive interpretation of the estimated effects by practitioners. We demonstrate the applicability of the softplus response function using both simulations and real data. In four applications featuring count data regression and Bayesian distributional regression, we contrast our approach with the commonly used exponential response function.

We present the near-Maximal Algorithm for Poisson-disk Sampling (nMAPS) to generate point distributions for variable resolution Delaunay triangular and tetrahedral meshes in two and three dimensions, respectively. nMAPS consists of two principal stages. In the first stage, an initial point distribution is produced using a cell-based rejection algorithm. In the second stage, holes in the sample are detected using an efficient background grid and filled in to obtain a near-maximal covering. Extensive testing shows that nMAPS generates a variable resolution mesh in run time linear in the number of accepted points. We demonstrate the capabilities of nMAPS by meshing three-dimensional discrete fracture networks (DFN) and the surrounding volume. The discretized boundaries of the fractures, which are represented as planar polygons, are used as the seed of 2D-nMAPS to produce a conforming Delaunay triangulation. The combined mesh of the DFN is used as the seed for 3D-nMAPS, which produces conforming Delaunay tetrahedra surrounding the network. Under a set of conditions that naturally arise in maximal Poisson-disk samples and are satisfied by nMAPS, the two-dimensional Delaunay triangulations are guaranteed to contain only well-behaved triangular faces. While nMAPS does not provide triangulation quality bounds in more than two dimensions, we found that low-quality tetrahedra in 3D are infrequent and can be readily detected and removed, producing a high-quality balanced mesh.
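The cell-based rejection stage can be sketched in a few lines. This is a minimal constant-radius dart-throwing sampler with a background grid (our own simplified illustration; nMAPS additionally supports variable resolution and adds the hole-filling second stage that makes the covering near-maximal):

```python
import numpy as np

def poisson_disk_2d(width, height, r, n_trials=2000, seed=0):
    """Dart-throwing Poisson-disk sampler with a background grid.

    Candidates are drawn uniformly and rejected if they fall within
    distance r of an accepted point.  The grid cell size r/sqrt(2)
    guarantees at most one sample per cell, so each rejection test
    only inspects a constant-size 5x5 neighbourhood of cells.
    """
    cell = r / np.sqrt(2.0)
    nx, ny = int(np.ceil(width / cell)), int(np.ceil(height / cell))
    grid = -np.ones((nx, ny), dtype=int)   # -1 marks an empty cell
    pts = []
    rng = np.random.default_rng(seed)
    for _ in range(n_trials):
        p = rng.uniform([0.0, 0.0], [width, height])
        i, j = int(p[0] / cell), int(p[1] / cell)
        ok = True
        for a in range(max(i - 2, 0), min(i + 3, nx)):
            for b in range(max(j - 2, 0), min(j + 3, ny)):
                k = grid[a, b]
                if k >= 0 and np.hypot(*(p - pts[k])) < r:
                    ok = False
                    break
            if not ok:
                break
        if ok:
            grid[i, j] = len(pts)
            pts.append(p)
    return np.array(pts)
```

Pure rejection alone leaves holes as the acceptance rate drops, which is exactly the gap that the background-grid hole detection of the second stage is designed to close.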

Structural equation models are commonly used to capture the relationship between sets of observed and unobservable variables. Traditionally these models are fitted using frequentist approaches, but recently researchers and practitioners have developed increasing interest in Bayesian inference. In Bayesian settings, inference for these models is typically performed via Markov chain Monte Carlo methods, which may be computationally intensive for models with a large number of manifest variables or complex structures. Variational approximations can be a fast alternative; however, they have not been adequately explored for this class of models. We develop a mean field variational Bayes approach for fitting elemental structural equation models and demonstrate how the bootstrap can considerably improve the variational approximation quality. We show that this variational approximation method can provide reliable inference while being significantly faster than Markov chain Monte Carlo.
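To show the coordinate-ascent mechanics that mean field variational Bayes relies on, here is a textbook toy example (not the paper's structural equation model): CAVI for $x_i \sim N(\mu, 1/\tau)$ with a conjugate Normal-Gamma prior, alternating updates of $q(\mu)$ and $q(\tau)$.

```python
import numpy as np

def cavi_normal(x, mu0=0.0, lam0=1.0, a0=1.0, b0=1.0, n_iter=50):
    """Mean-field VB (CAVI) for x_i ~ N(mu, 1/tau) with prior
    mu | tau ~ N(mu0, 1/(lam0*tau)), tau ~ Gamma(a0, b0).

    The factors q(mu) = N(mu_n, 1/lam_n) and q(tau) = Gamma(a_n, b_n)
    are updated in turn -- the same coordinate-ascent scheme that
    mean field VB applies to structural equation models.
    """
    n, xbar = len(x), np.mean(x)
    mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)   # fixed point for q(mu) mean
    a_n = a0 + (n + 1) / 2.0
    e_tau = a0 / b0                               # initial E[tau]
    for _ in range(n_iter):
        lam_n = (lam0 + n) * e_tau                # precision of q(mu)
        # E over q(mu) of sum (x_i - mu)^2 + lam0 (mu - mu0)^2:
        b_n = b0 + 0.5 * (np.sum((x - mu_n) ** 2) + n / lam_n
                          + lam0 * ((mu_n - mu0) ** 2 + 1.0 / lam_n))
        e_tau = a_n / b_n
    return mu_n, lam_n, a_n, b_n
```

Each update has a closed form, which is why the variational fit is so much faster than MCMC; the cost is the mean-field independence assumption, whose quality the paper's bootstrap step is designed to improve.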

Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields including economics, finance, and traffic. A basic assumption behind multivariate time series forecasting is that the variables depend on one another; upon looking closely, however, existing methods fail to fully exploit the latent spatial dependencies between pairs of variables. In recent years, meanwhile, graph neural networks (GNNs) have shown high capability in handling relational dependencies. However, GNNs require well-defined graph structures for information propagation, which means they cannot be applied directly to multivariate time series where the dependencies are not known in advance. In this paper, we propose a general graph neural network framework designed specifically for multivariate time series data. Our approach automatically extracts the uni-directed relations among variables through a graph learning module, into which external knowledge like variable attributes can be easily integrated. A novel mix-hop propagation layer and a dilated inception layer are further proposed to capture the spatial and temporal dependencies within the time series. The graph learning, graph convolution, and temporal convolution modules are jointly learned in an end-to-end framework. Experimental results show that our proposed model outperforms the state-of-the-art baseline methods on 3 of 4 benchmark datasets and achieves on-par performance with other approaches on two traffic datasets which provide extra structural information.
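To give a feel for how uni-directed relations can be learned, here is a numpy sketch of one way to build such a graph from trainable node embeddings. This is our simplified reading of the idea, not the paper's exact layer: an antisymmetric combination of embeddings makes a strong edge $i \to j$ suppress $j \to i$, and a top-$k$ cutoff keeps the adjacency sparse.

```python
import numpy as np

def learn_adjacency(E1, E2, alpha=3.0, k=4):
    """Sketch of a uni-directed graph learning module.

    E1, E2: node embeddings of shape (n_nodes, d), which in a full
    model would be trained end-to-end with the forecasting loss.
    The pre-activation M1 @ M2.T - M2 @ M1.T is antisymmetric, so
    after ReLU at most one of A[i, j], A[j, i] is nonzero.
    """
    M1, M2 = np.tanh(alpha * E1), np.tanh(alpha * E2)
    A = np.maximum(np.tanh(alpha * (M1 @ M2.T - M2 @ M1.T)), 0.0)
    # keep only the k largest weights in each row (sparsification)
    mask = np.zeros_like(A)
    idx = np.argsort(A, axis=1)[:, -k:]
    np.put_along_axis(mask, idx, 1.0, axis=1)
    return A * mask
```

The resulting sparse, directed adjacency is what the graph convolution (mix-hop propagation) layers would then use for information propagation.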

Long Short-Term Memory (LSTM) infers long-term dependencies through a cell state maintained by the input and forget gate structures, which model a gate output as a value in [0,1] through a sigmoid function. However, due to the gradual shape of the sigmoid function, the sigmoid gate is not flexible enough to represent multi-modality or skewness. Moreover, previous models do not model the correlation between the gates, which offers a new way to encode an inductive bias about the relationship between the previous and current inputs. This paper proposes a new gate structure based on the bivariate Beta distribution. The proposed gate structure enables probabilistic modeling of the gates within the LSTM cell, so that modelers can customize the cell state flow with priors and distributions. Moreover, we theoretically show a higher upper bound on the gradient compared to the sigmoid function, and we empirically observe that the bivariate Beta gate structure provides higher gradient values in training. We demonstrate the effectiveness of the bivariate Beta gate structure on sentence classification, image classification, polyphonic music modeling, and image caption generation.
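A minimal numpy sketch of the core idea, heavily simplified and under our own naming: a stochastic gate whose value in (0, 1) is sampled from a Beta distribution parameterized by two pre-activations, instead of being a deterministic sigmoid output. The Beta family can be skewed or even U-shaped, which a sigmoid output cannot express; the paper additionally couples the input and forget gates through a *bivariate* Beta, which this univariate sketch omits.

```python
import numpy as np

def beta_gate(a_logits, b_logits, rng):
    """Stochastic Beta gate: sample the gate value from Beta(a, b).

    The two pre-activations are mapped through a softplus to keep
    the shape parameters a, b strictly positive; a < 1 and b < 1
    yield U-shaped gate distributions, unlike a sigmoid gate.
    """
    a = np.logaddexp(0.0, a_logits) + 1e-6   # softplus keeps a > 0
    b = np.logaddexp(0.0, b_logits) + 1e-6   # softplus keeps b > 0
    return rng.beta(a, b)
```

In a full model the sampling step would use a reparameterized or relaxed estimator so that gradients can flow through the gate during training.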
