
We propose a stochastic volatility model for time series of curves. It is motivated by the dynamics of intraday price curves, which exhibit both between-day dependence and intraday price evolution. The curves are suitably normalized to be stationary in a function space and are functional analogs of point-to-point daily returns. The between-curves dependence is modeled by a latent autoregression. The within-curves behavior is modeled by a diffusion process. We establish the properties of the model and propose several approaches to its estimation. These approaches are justified by asymptotic arguments that involve an interplay between the latent autoregression and the intraday diffusions. The asymptotic framework combines the increasing number of daily curves and the refinement of the discrete grid on which each daily curve is observed. Consistency rates for the estimators of the intraday volatility curves are derived, as well as the asymptotic normality of the estimators of the latent autoregression. The estimation approaches are further explored and compared in an application to intraday price curves of over seven thousand U.S. stocks and an informative simulation study.
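As a rough illustration of the model class described above, the following sketch simulates a latent AR(1) log-volatility level that drives the spot volatility of each day's intraday diffusion. All parameter values, the exponential link, and the grid size are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

n_days, n_grid = 250, 390          # trading days, intraday grid (e.g. minutes)
phi, sigma_eta = 0.8, 0.3          # latent AR(1) parameters (illustrative)
dt = 1.0 / n_grid

# Between-day dependence: latent log-volatility level follows an AR(1).
h = np.zeros(n_days)
for t in range(1, n_days):
    h[t] = phi * h[t - 1] + sigma_eta * rng.standard_normal()

# Within-day behavior: each normalized price curve is a driftless diffusion
# whose volatility scale is set by the day's latent level.
curves = np.zeros((n_days, n_grid))
for t in range(n_days):
    vol = np.exp(h[t] / 2.0)
    increments = vol * np.sqrt(dt) * rng.standard_normal(n_grid)
    curves[t] = np.cumsum(increments)   # functional "return" curve for day t

# Realized-variance estimate of each day's integrated volatility, computed
# from the discretely observed curve.
rv = np.sum(np.diff(curves, axis=1) ** 2, axis=1)
print(rv[:5])
```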

Related content

In this paper we present a method for single-channel wind noise reduction using our previously proposed diffusion-based stochastic regeneration model, which combines predictive and generative modelling. We introduce a non-additive speech-in-noise model to account for the non-linear deformation of the membrane caused by the wind flow and for possible clipping. We show that our stochastic regeneration model outperforms other neural-network-based wind noise reduction methods, as well as purely predictive and generative models, on a dataset of simulated and real-recorded wind noise. We further show that the proposed method generalizes well by testing it on an unseen dataset with real-recorded wind noise. Audio samples, data generation scripts and code for the proposed methods can be found online (//uhh.de/inf-sp-storm-wind).
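The non-additive corruption can be pictured with a toy forward model: instead of adding wind noise to speech, the mixture passes through a saturating nonlinearity and a hard clip, standing in for the membrane deformation. The tanh saturation, gain, and clipping limit below are illustrative choices, not the paper's measured model.

```python
import numpy as np

rng = np.random.default_rng(0)

def wind_corrupt(speech, wind, gain=1.0, limit=0.9):
    """Toy non-additive mixing: the mixture is a saturated, clipped function
    of speech + wind rather than a plain sum. `gain` and `limit` are
    illustrative parameters."""
    mix = speech + gain * wind
    return np.clip(np.tanh(mix), -limit, limit)  # soft saturation + hard clip

fs = 16000
t = np.arange(fs) / fs                           # one second of audio
speech = 0.4 * np.sin(2 * np.pi * 220 * t)       # stand-in "speech" tone
wind = np.convolve(rng.standard_normal(t.size),  # low-frequency rumble via
                   np.ones(400) / 400, mode="same")  # a moving-average filter
noisy = wind_corrupt(speech, 3.0 * wind)
print(noisy.min(), noisy.max())                  # bounded by the clip limit
```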

Latent variable models are powerful tools for modeling complex phenomena involving, in particular, partially observed data, unobserved variables, or underlying complex unknown structures. Inference is often difficult due to the latent structure of the model. To deal with parameter estimation in the presence of latent variables, well-known efficient methods exist, such as gradient-based and EM-type algorithms, but they have practical and theoretical limitations. In this paper, we propose as an alternative for parameter estimation an efficient preconditioned stochastic gradient algorithm. Our method includes a preconditioning step based on a positive definite estimate of the Fisher information matrix. We prove convergence results for the proposed algorithm under mild assumptions for very general latent variable models. We illustrate the performance of the proposed methodology through simulations in a nonlinear mixed effects model and in a stochastic block model.
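A minimal sketch of one plausible reading of such a scheme: per-sample gradients feed a running empirical-Fisher estimate whose damped inverse preconditions each stochastic step. The running-average weights, damping constant, and toy Gaussian-mean example are all assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def preconditioned_sgd(grad_fn, theta0, data, lr=0.1, damping=1e-3,
                       n_steps=500, rng=None):
    """Stochastic gradient with a running empirical-Fisher preconditioner:
    each step solves (F + damping * I) d = g and moves along d."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta0, dtype=float).copy()
    fisher = np.eye(theta.size)
    for _ in range(n_steps):
        i = rng.integers(len(data))
        g = grad_fn(theta, data[i])
        fisher = 0.95 * fisher + 0.05 * np.outer(g, g)  # empirical Fisher
        step = np.linalg.solve(fisher + damping * np.eye(theta.size), g)
        theta -= lr * step
    return theta

# Toy example: Gaussian mean estimation; grad of the per-sample negative
# log-likelihood is theta - x.
data = np.random.default_rng(1).normal(3.0, 1.0, size=200)
grad = lambda th, x: np.array([th[0] - x])
print(preconditioned_sgd(grad, [0.0], data))     # converges near 3.0
```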

Dual-Primal Finite Element Tearing and Interconnecting (FETI-DP) algorithms are developed for a 2D Biot model. The model is formulated with mixed finite elements as a saddle-point problem. The displacement $\mathbf{u}$ and the Darcy flux $\mathbf{z}$ are represented with $P_1$ piecewise continuous elements and the pore pressure $p$ with $P_0$ piecewise constant elements, i.e., three fields overall with a stabilizing term. We have tested the functionality of FETI-DP with Dirichlet preconditioners. Numerical experiments indicate scalability of the resulting parallel algorithm for compressible elasticity with permeable Darcy flow, as well as for almost incompressible elasticity.
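FETI-DP itself involves subdomain tearing, primal constraints, and coarse problems that do not fit in a short snippet, but the stabilized saddle-point structure of the mixed formulation can be illustrated on a toy algebraic system solved with MINRES and a block-diagonal preconditioner. The random blocks below are stand-ins for assembled finite element matrices, not a discretized Biot model.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n_u, n_p = 40, 20                  # toy displacement/flux and pressure sizes

# Toy SPD leading block (stand-in for elasticity + Darcy), coupling B,
# and a small P0-pressure stabilization block C.
R = sp.random(n_u, n_u, density=0.2, random_state=0)
A = (R @ R.T + n_u * sp.eye(n_u)).tocsr()
B = sp.random(n_p, n_u, density=0.2, random_state=1).tocsr()
C = 1e-2 * sp.eye(n_p)

K = sp.bmat([[A, B.T], [B, -C]]).tocsr()   # symmetric stabilized saddle point
rhs = rng.standard_normal(n_u + n_p)

# Block-diagonal preconditioner from diag(A) and the stabilization block.
P_inv = spla.LinearOperator(
    K.shape,
    matvec=lambda v: np.concatenate([v[:n_u] / A.diagonal(),
                                     v[n_u:] / C.diagonal()]),
    dtype=float)
x, info = spla.minres(K, rhs, M=P_inv)
print("minres converged:", info == 0)
```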

Reinforcement Learning (RL) algorithms have shown tremendous success in simulation environments, but their application to real-world problems faces significant challenges, with safety being a major concern. In particular, enforcing state-wise constraints is essential for many challenging tasks such as autonomous driving and robot manipulation. However, existing safe RL algorithms under the framework of the Constrained Markov Decision Process (CMDP) do not consider state-wise constraints. To address this gap, we propose State-wise Constrained Policy Optimization (SCPO), the first general-purpose policy search algorithm for state-wise constrained reinforcement learning. SCPO provides guarantees for state-wise constraint satisfaction in expectation. In particular, we introduce the framework of the Maximum Markov Decision Process and prove that the worst-case safety violation is bounded under SCPO. We demonstrate the effectiveness of our approach on training neural network policies for a broad range of robot locomotion tasks, where the agent must satisfy a variety of state-wise safety constraints. Our results show that SCPO significantly outperforms existing methods and can handle state-wise constraints in high-dimensional robotics tasks.
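The Maximum Markov Decision Process reduction can be pictured as bookkeeping on the running maximum of the state-wise cost: bounding the sum of its increments bounds the worst-case violation. The helper below sketches that bookkeeping for a single rollout (assuming nonnegative costs); it is an illustrative aid, not the SCPO update itself.

```python
import numpy as np

def to_max_mdp(costs):
    """Augment a trajectory of nonnegative state-wise costs with their
    running maximum, the quantity a state-wise constraint must bound at
    every step. The increments of the running maximum telescope, so their
    sum equals the worst-case (maximum) violation along the rollout."""
    costs = np.asarray(costs, dtype=float)
    running_max = np.maximum.accumulate(costs)
    increments = np.diff(np.concatenate([[0.0], running_max]))
    assert np.isclose(increments.sum(), running_max[-1])
    return running_max, increments

# Example: per-step safety costs along one rollout.
rm, inc = to_max_mdp([0.1, 0.5, 0.3, 0.7, 0.2])
print(rm)    # [0.1 0.5 0.5 0.7 0.7]
print(inc)   # [0.1 0.4 0.  0.2 0. ] -- sums to the worst-case cost 0.7
```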

This paper aims to characterize the stylized facts of financial market returns and volatility and to address the problem that the tail characteristics of asset returns have not been sufficiently considered, in an attempt to more effectively avoid and manage stock market risk. To this end, the fat-tailed distribution and the leverage effect are introduced into the SV model. Next, the model parameters are estimated through MCMC. Subsequently, the fat-tailed distribution of financial market returns is comprehensively characterized and then combined with extreme value theory to fit the tail distribution of the standardized residuals. From this, a new financial risk measurement model is built, termed the SV-EVT-VaR-based dynamic model. Using the daily S&P 500 index and simulated returns, the empirical results reveal that the SV-EVT-based models outperform other models on out-of-sample data in backtesting and in depicting the fat-tailed property of financial returns and the leverage effect.
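The EVT layer can be sketched with a standard peaks-over-threshold estimator: fit a generalized Pareto distribution to the exceedances of the standardized residual losses over a high threshold and invert the tail formula. The threshold choice and the Student-t residuals below are illustrative; the full pipeline would first filter returns through the fitted SV model.

```python
import numpy as np
from scipy.stats import genpareto

def evt_var(residuals, q=0.99, threshold_q=0.95):
    """Peaks-over-threshold VaR for the loss tail of standardized residuals:
    fit a GPD to exceedances over a high empirical quantile and apply the
    standard POT quantile estimator."""
    losses = -np.asarray(residuals)                 # work with losses
    u = np.quantile(losses, threshold_q)
    exceed = losses[losses > u] - u
    xi, _, beta = genpareto.fit(exceed, floc=0.0)   # shape, (loc), scale
    n, n_u = losses.size, exceed.size
    return u + (beta / xi) * (((n / n_u) * (1.0 - q)) ** (-xi) - 1.0)

resid = np.random.default_rng(0).standard_t(df=4, size=5000)
print("99% VaR of standardized residuals:", evt_var(resid))
```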

Sparse functional/longitudinal data have attracted widespread interest due to the prevalence of such data in the social and life sciences. A prominent scenario where such data are routinely encountered is accelerated longitudinal studies, where subjects are enrolled in the study at a random time and are tracked only for a short period relative to the domain of interest. The statistical analysis of such functional snippets is challenging since information for the far-off-diagonal regions of the covariance structure is missing. Our main methodological contribution is to address this challenge by bypassing covariance estimation and instead modeling the underlying process as the solution of a data-adaptive stochastic differential equation. Taking advantage of the interface between Gaussian functional data and stochastic differential equations makes it possible to efficiently reconstruct the target process by estimating its dynamic distribution. The proposed approach allows one to consistently recover forward sample paths from functional snippets at the subject level. We establish the existence and uniqueness of the solution to the proposed data-driven stochastic differential equation and derive rates of convergence for the corresponding estimators. The finite-sample performance is demonstrated with simulation studies and functional snippets arising from a growth study and spinal bone mineral density data.
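Once drift and diffusion estimates are in hand, forward sample paths can be generated by Euler-Maruyama stepping of the fitted SDE. The sketch below takes the estimated coefficients as plain callables and substitutes a hand-picked mean-reverting pair, so everything beyond the stepping scheme is an illustrative assumption.

```python
import numpy as np

def forward_paths(x0, t0, t_grid, drift, diffusion, n_paths=100, rng=None):
    """Euler-Maruyama sampling of forward paths from an SDE
    dX_t = b(t, X_t) dt + s(t, X_t) dW_t, started at (t0, x0). In the
    snippets setting, `drift` and `diffusion` would be data-adaptive
    estimates; here they are plain callables."""
    rng = rng or np.random.default_rng(0)
    x = np.full(n_paths, float(x0))
    path, t_prev = [x.copy()], t0
    for t in t_grid:
        dt = t - t_prev
        dw = np.sqrt(dt) * rng.standard_normal(n_paths)
        x = x + drift(t_prev, x) * dt + diffusion(t_prev, x) * dw
        path.append(x.copy())
        t_prev = t
    return np.stack(path)

# Example: mean-reverting dynamics as a stand-in for the estimated SDE.
paths = forward_paths(x0=1.0, t0=0.0, t_grid=np.linspace(0.01, 1.0, 100),
                      drift=lambda t, x: -2.0 * (x - 0.5),
                      diffusion=lambda t, x: 0.3 * np.ones_like(x))
print(paths.shape)   # (101, 100): time grid (incl. start) x sample paths
```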

Learning interpretable representations of neural dynamics at a population level is a crucial first step to understanding how observed neural activity relates to perception and behavior. Models of neural dynamics often focus either on low-dimensional projections of neural activity or on learning dynamical systems that explicitly relate to the neural state over time. We discuss how these two approaches are interrelated by considering dynamical systems as representative of flows on a low-dimensional manifold. Building on this concept, we propose a new decomposed dynamical system model that represents complex non-stationary and nonlinear dynamics of time series data as a sparse combination of simpler, more interpretable components. Our model is trained through a dictionary learning procedure, where we leverage recent results in tracking sparse vectors over time. The decomposed nature of the dynamics is more expressive than previous switched approaches for a given number of parameters and enables modeling of overlapping and non-stationary dynamics. In both continuous-time and discrete-time instructional examples, we demonstrate that our model can closely approximate the original system, learn efficient representations, and capture smooth transitions between dynamical modes, focusing on intuitive low-dimensional non-stationary linear and nonlinear systems. Furthermore, we highlight our model's ability to efficiently capture and demix population dynamics generated from multiple independent subnetworks, a task that is computationally impractical for switched models. Finally, we apply our model to neural "full brain" recordings of C. elegans, illustrating a diversity of dynamics that is obscured when classified into discrete states.
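The decomposed forward model can be illustrated directly: the state evolves under a time-varying, sparse combination of fixed dynamics operators drawn from a small dictionary. Here the dictionary (a rotation and a contraction) and the smoothly varying coefficients are fixed by hand, whereas the model described above learns both from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two simple dynamics operators: a slow rotation and a contraction.
theta = 0.1
A_rot = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
A_con = 0.95 * np.eye(2)
dictionary = [A_rot, A_con]

# Time-varying coefficients: the system smoothly shifts from
# rotation-dominated to contraction-dominated dynamics (no hard switch).
T = 200
c = np.zeros((T, 2))
c[:, 0] = np.linspace(1.0, 0.0, T)
c[:, 1] = 1.0 - c[:, 0]

x = np.zeros((T + 1, 2))
x[0] = [1.0, 0.0]
for t in range(T):
    A_t = c[t, 0] * dictionary[0] + c[t, 1] * dictionary[1]
    x[t + 1] = A_t @ x[t] + 0.01 * rng.standard_normal(2)  # small noise
print(x[:3], x[-1])
```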

For ultra-reliable, low-latency communications (URLLC) applications such as mission-critical industrial control and extended reality (XR), it is important to ensure the communication quality of individual packets. Prior studies have considered Probabilistic Per-packet Real-time Communications (PPRC) guarantees for single-cell, single-channel networks, but they have not considered real-world complexities such as inter-cell interference in large-scale networks with multiple communication channels and heterogeneous real-time requirements. To fill the gap, we propose a real-time scheduling algorithm based on \emph{local-deadline-partition (LDP)}, which ensures the PPRC guarantee for large-scale, multi-channel networks with heterogeneous real-time constraints. We also address the associated challenge of the schedulability test. In particular, we propose the concept of the \emph{feasible set}, identify a closed-form sufficient condition for the schedulability of PPRC traffic, and then propose an efficient distributed algorithm for the schedulability test. We numerically study the properties of the LDP algorithm and observe that it significantly improves the network capacity of URLLC, for instance, by a factor of 5-20 compared with a typical method. Furthermore, the PPRC traffic supportable by the LDP algorithm is significantly higher than that of state-of-the-art comparison schemes. This demonstrates the potential of fine-grained scheduling algorithms for interference-limited URLLC wireless systems.
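The flavor of deadline partitioning can be conveyed with a deliberately simple toy: split an end-to-end deadline into equal per-hop local deadlines, then serve packets earliest-local-deadline-first across the available channels. Both helpers below are illustrative stand-ins, not the paper's LDP rule or its schedulability test.

```python
import heapq

def partition_deadline(deadline, n_hops):
    """Split an end-to-end deadline into equal per-hop local deadlines;
    a deliberately simple stand-in for a local-deadline-partition rule."""
    return [deadline * (i + 1) / n_hops for i in range(n_hops)]

def edf_schedule(packets, n_channels):
    """Earliest-(local-)deadline-first assignment of packets to channels,
    one slot at a time. `packets` is a list of (local_deadline, packet_id)."""
    heap = list(packets)
    heapq.heapify(heap)
    schedule = []
    while heap:
        slot = []
        for _ in range(n_channels):
            if not heap:
                break
            slot.append(heapq.heappop(heap)[1])
        schedule.append(slot)
    return schedule

print(partition_deadline(12, 3))   # [4.0, 8.0, 12.0]
print(edf_schedule([(4, "a"), (2, "b"), (9, "c"), (5, "d")], n_channels=2))
# [['b', 'a'], ['d', 'c']]
```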

We propose a family of recursive cutting-plane algorithms to solve feasibility problems with constrained memory, which can also be used for first-order convex optimization. Precisely, in order to find a point within a ball of radius $\epsilon$ with a separation oracle in dimension $d$ -- or to minimize $1$-Lipschitz convex functions to accuracy $\epsilon$ over the unit ball -- our algorithms use $\mathcal O(\frac{d^2}{p}\ln \frac{1}{\epsilon})$ bits of memory, and make $\mathcal O((C\frac{d}{p}\ln \frac{1}{\epsilon})^p)$ oracle calls, for some universal constant $C \geq 1$. The family is parametrized by $p\in[d]$ and provides an oracle-complexity/memory trade-off in the sub-polynomial regime $\ln\frac{1}{\epsilon}\gg\ln d$. Several works gave lower-bound trade-offs (impossibility results); we make their dependence on $\ln\frac{1}{\epsilon}$ explicit, showing that these lower bounds also hold in any sub-polynomial regime. To the best of our knowledge, this is the first class of algorithms that provides a positive trade-off between gradient descent and cutting-plane methods in any regime with $\epsilon\leq 1/\sqrt d$. The algorithms divide the $d$ variables into $p$ blocks and optimize over blocks sequentially, with approximate separation vectors constructed using a variant of Vaidya's method. In the regime $\epsilon \leq d^{-\Omega(d)}$, our algorithm with $p=d$ achieves the information-theoretic optimal memory usage and improves the oracle-complexity of gradient descent.
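The block-sequential skeleton is easy to sketch: split the $d$ coordinates into $p$ blocks and optimize one block at a time while holding the others fixed. The snippet below uses plain gradient steps inside each block on a convex quadratic; the actual algorithms instead build approximate separation vectors with a Vaidya-type cutting-plane method, so this is only a structural illustration.

```python
import numpy as np

def block_sequential_minimize(grad, x0, p, steps_per_block=50, lr=0.01,
                              sweeps=20):
    """Cycle over p coordinate blocks, running gradient steps on one block
    at a time with the remaining coordinates frozen. Only the memory-saving
    block structure is illustrated here, not the cutting-plane machinery."""
    x = np.asarray(x0, dtype=float).copy()
    blocks = np.array_split(np.arange(x.size), p)
    for _ in range(sweeps):
        for idx in blocks:
            for _ in range(steps_per_block):
                g = grad(x)
                x[idx] -= lr * g[idx]          # update this block only
    return x

# Example: minimize f(x) = 0.5 * x^T Q x - b^T x for a well-conditioned Q.
rng = np.random.default_rng(0)
d = 8
R = rng.standard_normal((d, d))
Q = R @ R.T + d * np.eye(d)
b = rng.standard_normal(d)
x_star = np.linalg.solve(Q, b)
x_hat = block_sequential_minimize(lambda x: Q @ x - b, np.zeros(d), p=4)
print(np.linalg.norm(x_hat - x_star))          # small residual error
```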

Graph Convolutional Networks (GCNs) and their variants have experienced significant attention and have become the de facto methods for learning graph representations. GCNs derive inspiration primarily from recent deep learning approaches and, as a result, may inherit unnecessary complexity and redundant computation. In this paper, we reduce this excess complexity by successively removing nonlinearities and collapsing weight matrices between consecutive layers. We theoretically analyze the resulting linear model and show that it corresponds to a fixed low-pass filter followed by a linear classifier. Notably, our experimental evaluation demonstrates that these simplifications do not negatively impact accuracy in many downstream applications. Moreover, the resulting model scales to larger datasets, is naturally interpretable, and yields up to two orders of magnitude speedup over FastGCN.
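The resulting linear model is simple enough to sketch end to end: precompute the propagated features $S^K X$, where $S = \tilde D^{-1/2}(A + I)\tilde D^{-1/2}$ is the normalized adjacency with self-loops, then train any linear classifier on them. The tiny path graph below is an illustrative example; a real pipeline would use sparse matrices and a logistic-regression head.

```python
import numpy as np

def sgc_features(adj, features, k=2):
    """Simplified graph convolution: compute S^k X with
    S = D^{-1/2} (A + I) D^{-1/2}, the fixed low-pass filter of the
    collapsed linear model. A linear classifier is then trained on the
    returned features."""
    a_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    s = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    x = features
    for _ in range(k):
        x = s @ x                                # k-hop feature propagation
    return x

# Tiny example: a 4-node path graph with 2-dimensional features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)[:, :2]
print(sgc_features(A, X, k=2))
```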
