We propose FNETS, a methodology for network estimation and forecasting of high-dimensional time series exhibiting strong serial and cross-sectional correlations. We operate under a factor-adjusted vector autoregressive (VAR) model which, after accounting for pervasive co-movements of the variables by {\it common} factors, models the remaining {\it idiosyncratic} dynamic dependence between the variables as a sparse VAR process. Network estimation in FNETS consists of three steps: (i) factor adjustment via dynamic principal component analysis, (ii) estimation of the latent VAR process via an $\ell_1$-regularised Yule-Walker estimator, and (iii) estimation of partial correlation and long-run partial correlation matrices. In doing so, we learn three networks underpinning the VAR process: a directed network representing the Granger causal linkages between the variables, an undirected one embedding their contemporaneous relationships and, finally, an undirected network that summarises both lead-lag and contemporaneous linkages. In addition, FNETS provides a suite of methods for forecasting the factor-driven and the idiosyncratic VAR processes. Under general conditions permitting tails heavier than Gaussian, we derive uniform consistency rates for the estimators in both network estimation and forecasting, which hold as the dimension of the panel and the sample size diverge. Simulation studies and a real data application confirm the good performance of FNETS.
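
As an illustration of steps (ii) and (iii), the following minimal Python sketch (not the authors' implementation) estimates a sparse VAR(1) on an already factor-adjusted panel by one lasso regression per equation, reads the directed Granger network off the non-zero transition coefficients, and obtains the contemporaneous partial-correlation network from the inverse innovation covariance; the plain lasso regression and the choice of `alpha` are stand-ins for the $\ell_1$-regularised Yule-Walker estimator of the paper.

import numpy as np
from sklearn.linear_model import Lasso

def var1_networks(x, alpha=0.1):
    """x: (T, p) factor-adjusted panel; returns transition matrix and networks."""
    x_lag, x_now = x[:-1], x[1:]
    p = x.shape[1]
    A_hat = np.zeros((p, p))
    for i in range(p):  # one lasso regression per equation of the VAR(1)
        A_hat[i] = Lasso(alpha=alpha, fit_intercept=False).fit(x_lag, x_now[:, i]).coef_
    granger = A_hat != 0                      # directed edge j -> i if A_hat[i, j] != 0
    resid = x_now - x_lag @ A_hat.T           # VAR(1) innovations
    prec = np.linalg.pinv(np.cov(resid, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)            # contemporaneous partial correlations
    np.fill_diagonal(pcorr, 1.0)
    return A_hat, granger, pcorr

# toy usage on a simulated panel
rng = np.random.default_rng(0)
A_hat, granger, pcorr = var1_networks(rng.standard_normal((200, 10)))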

Related content

Advances in compact sensing devices mounted on satellites have facilitated the collection of large spatio-temporal datasets with coordinates. Since such datasets are often incomplete and noisy, it is useful to create the prediction surface of a spatial field. To this end, we consider online filtering inference using the Kalman filter based on linear Gaussian state-space models. However, the Kalman filter is impractically time-consuming when the number of locations in a spatio-temporal dataset is large. To address this problem, we propose a multi-resolution filter via linear projection (MRF-lp), a fast computation method for online filtering inference. In the MRF-lp, a multi-resolution approximation via linear projection (MRA-lp) approximates the forecast covariance matrix while capturing both the large- and small-scale spatial variations. As a result of this approximation, the MRF-lp preserves the block-sparse structure of the matrices appearing in the algorithm over time, which makes the algorithm scalable. Additionally, we discuss extensions of the MRF-lp to nonlinear and non-Gaussian cases. Simulation studies and a real data analysis of total precipitable water vapor demonstrate that our proposed approach performs well compared with related methods.
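
For reference, the exact linear-Gaussian filtering recursion that the MRF-lp accelerates is sketched below; in the actual method the dense forecast covariance computed in the forecast step would be replaced by the block-sparse multi-resolution approximation obtained from the MRA-lp, which is not reproduced here.

import numpy as np

def kalman_filter(y, F, Q, H, R, m0, P0):
    """Standard Kalman filter for y_t = H x_t + v_t, x_t = F x_{t-1} + w_t.
    MRF-lp approximates the dense forecast covariance P_pred below with a
    block-sparse multi-resolution structure; here it is kept exact."""
    m, P = m0, P0
    means = []
    for yt in y:
        # forecast step
        m_pred = F @ m
        P_pred = F @ P @ F.T + Q
        # update step
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        m = m_pred + K @ (yt - H @ m_pred)
        P = (np.eye(len(m)) - K @ H) @ P_pred
        means.append(m)
    return np.array(means)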

There is a constant need for high-performing and computationally efficient neural network models for image super-resolution: computationally efficient models can be deployed on low-capacity devices and reduce carbon footprints. One way to obtain such models is to compress existing models, e.g. by quantization. Another way is neural architecture search, which automatically discovers new, more efficient solutions. We propose QuantNAS, a novel quantization-aware procedure that combines the advantages of these two approaches by searching directly for quantization-friendly super-resolution models. The approach utilizes entropy regularization, quantization noise, and an Adaptive Deviation for Quantization (ADQ) module to enhance the search procedure. The entropy regularization technique prioritizes a single operation within each block of the search space. Adding quantization noise to parameters and activations approximates model degradation after quantization, resulting in more quantization-friendly architectures. ADQ helps to alleviate problems caused by Batch Norm blocks in super-resolution models. Our experimental results show that the proposed approximations serve the search procedure better than direct model quantization. QuantNAS discovers architectures with a better PSNR/BitOps trade-off than uniform or mixed-precision quantization of fixed architectures. We showcase the effectiveness of our method by applying it to two search spaces inspired by state-of-the-art SR models and RFDN; thus, anyone can design a suitable search space based on an existing architecture and apply our method to obtain better quality and efficiency. The proposed procedure is also about 30\% faster than direct weight quantization and is more stable.
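
Two of the ingredients lend themselves to a compact sketch: injecting quantization noise into weights or activations during the search, and an entropy penalty over the architecture logits of a search block. Both functions below are simplified stand-ins (uniform noise of one quantization step, softmax entropy) rather than the exact formulation used in QuantNAS.

import numpy as np

def add_quant_noise(w, n_bits=8, rng=None):
    """Approximate post-quantization degradation by adding uniform noise of one
    quantization step instead of actually rounding the values."""
    rng = rng or np.random.default_rng()
    step = (w.max() - w.min()) / (2 ** n_bits - 1)
    return w + rng.uniform(-step / 2, step / 2, size=w.shape)

def entropy_penalty(arch_logits):
    """Entropy of the softmax over candidate operations in one search block;
    adding it to the loss pushes the block towards a single dominant operation."""
    p = np.exp(arch_logits - np.max(arch_logits))
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-12))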

A general theory of efficient estimation for ergodic diffusion processes sampled at high frequency with an infinite time horizon is presented. High frequency sampling is common in many applications, with finance as a prominent example. The theory is formulated in terms of approximate martingale estimating functions and covers a large class of estimators, including most of the previously proposed estimators for diffusion processes. Easily checked conditions ensuring that an estimating function is an approximate martingale are derived, and general conditions ensuring consistency and asymptotic normality of estimators are given. Most importantly, simple conditions are given that ensure rate optimality and efficiency. Rate optimal estimators of parameters in the diffusion coefficient converge faster than estimators of drift coefficient parameters because they take advantage of the information in the quadratic variation. The conditions facilitate the choice among the multitude of estimators that have been proposed for diffusion models. Optimal martingale estimating functions in the sense of Godambe and Heyde, and their high frequency approximations, are shown under weak conditions to satisfy the conditions for rate optimality and efficiency. This provides a natural, feasible method of constructing explicit rate optimal and efficient estimating functions by solving a linear equation.
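
A toy instance of this framework, assuming an Ornstein-Uhlenbeck model $dX_t = -\theta X_t\,dt + \sigma\,dW_t$ observed at high frequency: the diffusion parameter is recovered at the faster rate from the realised quadratic variation, while the drift parameter solves a simple approximate martingale estimating equation.

import numpy as np

rng = np.random.default_rng(1)
theta, sigma, dt, n = 0.7, 0.5, 0.001, 100_000

# simulate an Ornstein-Uhlenbeck path dX = -theta X dt + sigma dW (Euler scheme)
x = np.zeros(n + 1)
for i in range(n):
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * np.sqrt(dt) * rng.standard_normal()

dx = np.diff(x)
# diffusion parameter from the realised quadratic variation (the faster rate)
sigma2_hat = np.sum(dx ** 2) / (n * dt)
# drift parameter from the estimating equation sum_i x_{i-1}(dx_i + theta x_{i-1} dt) = 0
theta_hat = -np.sum(x[:-1] * dx) / (dt * np.sum(x[:-1] ** 2))
print(sigma2_hat, theta_hat)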

This paper presents a time-causal analogue of the Gabor filter, as well as an analogue of the Gabor transform that is both time-causal and time-recursive, where the proposed time-causal representations obey both temporal scale covariance and a cascade property with a simplifying kernel over temporal scales. The motivation behind these constructions is to enable theoretically well-founded time-frequency analysis over multiple temporal scales for real-time situations, or for physical or biological modelling situations, where the future cannot be accessed and the non-causal access to the future in Gabor filtering is therefore not viable for a time-frequency analysis of the system. We develop the theory for these representations, obtained by replacing the Gaussian kernel in Gabor filtering with a time-causal kernel, referred to as the time-causal limit kernel, which guarantees simplification properties from finer to coarser levels of scale in a time-causal setting, similar to what the Gaussian kernel can be shown to guarantee over a non-causal temporal domain. In this way, the proposed time-frequency representations guarantee a well-founded treatment over multiple scales in situations where the characteristic scales of the signals, or of the physical or biological phenomena to be analyzed, may vary substantially, and where additionally all steps in the time-frequency analysis have to be fully time-causal.
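
The construction can be sketched as follows, with the time-causal limit kernel approximated here by a truncated cascade of first-order recursive filters (the exact kernel and its parameterisation are given in the paper): the signal is modulated by a complex exponential and then smoothed by the causal, time-recursive cascade.

import numpy as np

def causal_cascade_smooth(f, mus):
    """Cascade of first-order recursive filters with time constants `mus`;
    a time-causal, time-recursive approximation of temporal smoothing."""
    out = np.asarray(f, dtype=complex)
    for mu in mus:
        c = 1.0 / (1.0 + mu)
        y = np.empty_like(out)
        prev = 0.0
        for t in range(len(out)):
            prev = prev + c * (out[t] - prev)   # y[t] = y[t-1] + c (f[t] - y[t-1])
            y[t] = prev
        out = y
    return out

def time_causal_gabor(f, omega, mus):
    """Time-causal analogue of Gabor filtering at angular frequency omega:
    modulate first, then smooth with the causal cascade."""
    t = np.arange(len(f))
    return causal_cascade_smooth(f * np.exp(-1j * omega * t), mus)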

Fault detection and isolation in complex systems are critical to ensure reliable and efficient operation. However, traditional fault detection methods often struggle with issues such as nonlinearity and the multivariate nature of the time series variables. This article proposes a generative adversarial wavelet neural operator (GAWNO), a novel unsupervised deep learning approach for fault detection and isolation in multivariate time series processes. The GAWNO combines the strengths of wavelet neural operators and generative adversarial networks (GANs) to effectively capture both the temporal distributions and the spatial dependencies among different variables of an underlying system. Fault detection and isolation with the GAWNO proceeds in two main stages. In the first stage, the GAWNO is trained on a dataset of normal operating conditions to learn the underlying data distribution. In the second stage, a reconstruction error-based thresholding approach using the trained GAWNO detects and isolates faults based on the discrepancy values. We validate the proposed approach using the Tennessee Eastman Process (TEP) dataset and the Avedore wastewater treatment plant (WWTP) N2O emissions dataset, referred to as WWTPN2O. Overall, harnessing wavelet analysis, neural operators, and generative models in a single framework to detect and isolate faults yields promising results compared with various well-established baselines in the literature.
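
The second stage can be summarised in a few lines, assuming `model` is a callable standing in for the trained GAWNO reconstruction network: the detection threshold is calibrated on reconstruction errors of normal operating data, and test windows exceeding it are flagged as faulty.

import numpy as np

def detect_faults(model, x_normal, x_test, q=0.99):
    """Threshold reconstruction errors of a trained generative model; `model`
    is a placeholder for the trained GAWNO reconstruction network."""
    def recon_error(x):
        return np.mean((x - model(x)) ** 2, axis=tuple(range(1, x.ndim)))
    threshold = np.quantile(recon_error(x_normal), q)   # calibrated on normal data
    scores = recon_error(x_test)
    return scores > threshold, scores, threshold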

Invariant causal prediction (ICP, Peters et al. (2016)) provides a novel way to identify causal predictors of a response by utilizing heterogeneous data from different environments. One advantage of ICP is that it guarantees, with high probability, that no false causal discoveries are made. Such a guarantee, however, can be too conservative in some applications, resulting in few or no discoveries. To address this, we propose simultaneous false discovery bounds for ICP, which provide users with extra flexibility in exploring causal predictors and can extract more informative results. These additional inferences come for free, in the sense that they require no additional assumptions, and all information obtained by the original ICP is retained. We demonstrate the practical usage of our method through simulations and a real dataset.
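
For context, a minimal sketch of the underlying ICP procedure (not the new simultaneous bounds) is given below: every candidate predictor set is tested for invariance of its regression residuals across environments, here with a crude one-way ANOVA on residual means, and the accepted sets are intersected.

import numpy as np
from itertools import combinations
from scipy import stats

def icp_accepted_intersection(X, y, env, alpha=0.05):
    """Plain ICP sketch (Peters et al., 2016): accept subsets whose residuals
    look invariant across environments, then intersect the accepted subsets."""
    n, p = X.shape
    accepted = []
    for k in range(p + 1):
        for S in combinations(range(p), k):
            A = np.column_stack([np.ones(n), X[:, list(S)]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            r = y - A @ beta
            groups = [r[env == e] for e in np.unique(env)]
            if stats.f_oneway(*groups).pvalue > alpha:   # crude invariance check
                accepted.append(set(S))
    return set.intersection(*accepted) if accepted else set()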

We propose a deep neural network (DNN) based least distance (LD) estimator (DNN-LD) for multivariate regression, addressing limitations of conventional methods. Owing to the flexibility of the DNN structure, both linear and nonlinear conditional mean functions can be easily modeled, and a multivariate regression model is realized by simply adding extra nodes at the output layer. The least distance loss is more efficient than the least squares loss in capturing the dependency structure among responses, and is robust to outliers. In addition, we consider $L_1$-type penalization for variable selection, which is crucial in analyzing high-dimensional data. Namely, we propose what we call the (A)GDNN-LD estimator, which performs variable selection and model estimation simultaneously by applying the (adaptive) group Lasso penalty to the weight parameters of the DNN structure. For the computation, we propose a quadratic smoothing approximation to facilitate optimization of the non-smooth objective function based on the least distance loss. Simulation studies and a real data analysis demonstrate the promising performance of the proposed method.
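
The loss at the core of the method can be sketched as follows: the least distance loss sums the Euclidean (not squared) norms of the multivariate residuals, and a quadratic smoothing near zero keeps it differentiable. The exact smoothing used in the paper may differ from this common choice.

import numpy as np

def smoothed_least_distance_loss(Y, Y_hat, delta=1e-3):
    """Least distance loss sum_i ||y_i - yhat_i||_2 with a quadratic smoothing
    of the Euclidean norm near zero so that gradients stay well defined."""
    r = np.linalg.norm(Y - Y_hat, axis=1)           # per-observation distances
    quad = r ** 2 / (2 * delta) + delta / 2         # quadratic part for small r
    return np.sum(np.where(r <= delta, quad, r))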

We present a Bayesian method for multivariate changepoint detection that allows for simultaneous inference on the location of a changepoint and the coefficients of a logistic regression model for distinguishing pre-changepoint data from post-changepoint data. In contrast to many methods for multivariate changepoint detection, the proposed method is applicable to data of mixed type and avoids strict assumptions regarding the distribution of the data and the nature of the change. The regression coefficients provide an interpretable description of a potentially complex change. For posterior inference, the model admits a simple Gibbs sampling algorithm based on P\'olya-gamma data augmentation. We establish conditions under which the proposed method is guaranteed to recover the true underlying changepoint. As a testing ground for our method, we consider the problem of detecting topological changes in time series of images. We demonstrate that the proposed method, combined with a novel topological feature embedding, performs well on both simulated and real image data.
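
One step of the resulting Gibbs sampler is illustrated below: conditional on the logistic coefficients, the changepoint location has a discrete full conditional that can be sampled directly under a uniform prior; the Pólya-gamma augmented update of the coefficients themselves is omitted from this sketch.

import numpy as np

def sample_changepoint(X, beta, rng=None):
    """Given logistic coefficients `beta`, draw the changepoint tau from its
    full conditional under a uniform prior: points before tau are labelled
    pre-change (0) and points from tau onwards post-change (1). This is only
    the tau-step of a Gibbs sweep; the Polya-gamma beta-update is not shown."""
    rng = rng or np.random.default_rng()
    n = len(X)
    eta = X @ beta                                   # logistic linear predictor
    log_p1 = -np.log1p(np.exp(-eta))                 # log P(post-change label)
    log_p0 = -np.log1p(np.exp(eta))                  # log P(pre-change label)
    pre0 = np.concatenate([[0.0], np.cumsum(log_p0)])
    pre1 = np.concatenate([[0.0], np.cumsum(log_p1)])
    taus = np.arange(n + 1)                          # tau = 0 means no pre-change points
    loglik = pre0[taus] + (pre1[n] - pre1[taus])
    probs = np.exp(loglik - loglik.max())
    probs /= probs.sum()
    return rng.choice(taus, p=probs)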

Consider a sensor network consisting of both anchor and non-anchor nodes. We address the following sensor network localization (SNL) problem: given the physical locations of the anchor nodes and relative measurements among all nodes, determine the locations of all non-anchor nodes. Solving the SNL problem is challenging due to its inherent non-convexity. In this paper, the problem is cast as a multi-player non-convex potential game in which canonical duality theory is used to define a complementary dual potential function. After showing that the Nash equilibrium (NE) corresponds to the SNL solution, we provide a necessary and sufficient condition for a stationary point to coincide with the NE. An algorithm is proposed to reach the NE and is shown to have convergence rate $\mathcal{O}(1/\sqrt{k})$. With the aim of reducing the information exchange within the network, a distributed algorithm for NE seeking is implemented and its global convergence analysis is provided. Extensive simulations show the validity and effectiveness of the proposed approach for solving the SNL problem.
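
The potential-game structure behind the problem can be illustrated with a simplified gradient sketch, assuming each non-anchor node only measures distances to the anchors: every node repeatedly descends its local cost $\sum_j(\|x_i - p_j\| - d_{ij})^2$. The canonical-duality characterisation and the distributed NE-seeking algorithm of the paper are not reproduced here.

import numpy as np

def local_descent_localization(anchors, dists, x0, steps=500, lr=0.01):
    """anchors: (m, 2) anchor positions; dists: (k, m) measured node-anchor
    distances; x0: (k, 2) initial guesses for the non-anchor nodes.
    Each node descends sum_j (||x_i - p_j|| - d_ij)^2 over its measurements."""
    x = x0.copy()
    for _ in range(steps):
        for i in range(len(x)):
            diff = x[i] - anchors                              # (m, 2)
            norms = np.linalg.norm(diff, axis=1) + 1e-12
            grad = np.sum(((norms - dists[i]) / norms)[:, None] * diff, axis=0)
            x[i] -= lr * 2 * grad                              # gradient of the local cost
    return x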

Time Series Classification (TSC) is an important and challenging problem in data mining. With the increasing availability of time series data, hundreds of TSC algorithms have been proposed. Among these methods, only a few have considered Deep Neural Networks (DNNs) to perform this task. This is surprising, as deep learning has seen very successful applications in recent years. DNNs have indeed revolutionized the field of computer vision, especially with the advent of novel deeper architectures such as Residual and Convolutional Neural Networks. Apart from images, sequential data such as text and audio can also be processed with DNNs to reach state-of-the-art performance for document classification and speech recognition. In this article, we study the current state-of-the-art performance of deep learning algorithms for TSC by presenting an empirical study of the most recent DNN architectures for TSC. We give an overview of the most successful deep learning applications in various time series domains under a unified taxonomy of DNNs for TSC. We also provide an open-source deep learning framework to the TSC community, in which we implemented each of the compared approaches and evaluated them on a univariate TSC benchmark (the UCR/UEA archive) and 12 multivariate time series datasets. By training 8,730 deep learning models on 97 time series datasets, we present the most exhaustive study of DNNs for TSC to date.
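
As a concrete example of the kind of architecture covered by such benchmarks, a fully convolutional network baseline for TSC can be written in a few lines of Keras; this is a sketch of one standard baseline, not the code of the released framework.

from tensorflow import keras
from tensorflow.keras import layers

def build_fcn(series_length, n_channels, n_classes):
    """Fully convolutional network baseline for time series classification."""
    inputs = keras.Input(shape=(series_length, n_channels))
    x = inputs
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model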
