
We propose model-free (nonparametric) estimators of the volatility of volatility and leverage effect using high-frequency observations of short-dated options. At each point in time, we integrate available options into estimates of the conditional characteristic function of the price increment until the options' expiration and we use these estimates to recover spot volatility. Our volatility of volatility estimator is then formed from the sample variance and first-order autocovariance of the spot volatility increments, with the latter correcting for the bias in the former due to option observation errors. The leverage effect estimator is the sample covariance between price increments and the estimated volatility increments. The rate of convergence of the estimators depends on the diffusive innovations in the latent volatility process as well as on the observation error in the options with strikes in the vicinity of the current spot price. Feasible inference is developed in a way that does not require prior knowledge of the source of estimation error that is asymptotically dominating.
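
As a rough illustration of the two estimators described above, here is a minimal sketch assuming a regularly spaced series of option-implied spot-volatility estimates and log-price increments has already been recovered; the names, the scaling, and the bias-correction form (the standard measurement-error correction that adds twice the first-order autocovariance) are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def vol_of_vol(spot_vol, dt):
    """Vol-of-vol from the sample variance of spot-volatility increments,
    debiased with the first-order autocovariance induced by option
    observation errors (classical measurement-error correction)."""
    dV = np.diff(spot_vol)                # spot-volatility increments
    raw_var = np.mean(dV ** 2)            # inflated by observation noise
    acov1 = np.mean(dV[1:] * dV[:-1])     # negative under i.i.d. observation noise
    return (raw_var + 2.0 * acov1) / dt   # bias-corrected estimate

def leverage_effect(spot_vol, log_price, dt):
    """Leverage effect as the sample covariance between price increments
    and estimated spot-volatility increments."""
    dV = np.diff(spot_vol)
    dp = np.diff(log_price)
    return np.mean(dp * dV) / dt
```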

Related content

A quantum density matrix represents the complete state of a quantum system, and models of meaning built on density matrices naturally capture linguistic phenomena such as hyponymy and linguistic ambiguity, and have been applied to quantum question answering tasks. We argue that applying the quantum density matrix to classical Question Answering (QA) tasks can likewise improve performance. Specifically, we (i) design a new mechanism based on Long Short-Term Memory (LSTM) to accommodate matrix-valued inputs; (ii) combine this mechanism with a Convolutional Neural Network (CNN) to obtain an LSTM-based QA model that operates on quantum density matrices. Experiments with the new model on the TREC-QA and WIKI-QA data sets show encouraging results. Similarly, we argue that the quantum density matrix can also enrich image feature information and the relationships between features for classical image classification. Thus, we (i) combine density matrices and CNNs to design a new mechanism; (ii) apply it to several representative classical image classification tasks. A series of experiments shows that applying the quantum density matrix to image classification generalizes well and is efficient across different datasets. In both classical question answering and classical image classification, the quantum density matrix yields more effective performance.
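
For intuition, a minimal sketch of how a density matrix could be built from word embeddings before being passed to the matrix-input LSTM/CNN layers; the weighting and normalisation here are illustrative assumptions, not the paper's exact mechanism:

```python
import numpy as np

def sentence_density_matrix(embeddings, weights=None):
    """Build a density matrix rho = sum_i p_i |v_i><v_i| from word embeddings.

    embeddings: (n_words, dim) array of word vectors
    weights:    optional non-negative word weights (e.g. tf-idf); uniform if None
    """
    n, dim = embeddings.shape
    p = np.ones(n) / n if weights is None else np.asarray(weights, dtype=float)
    p = p / p.sum()                                    # probabilities sum to one
    # normalise each word vector to a unit-norm "pure state"
    v = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    rho = np.einsum("i,ij,ik->jk", p, v, v)            # mixture of projectors
    return rho                                         # symmetric, PSD, trace one
```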

Accurate and high-resolution Earth system model (ESM) simulations are essential for assessing the ecological and socio-economic impacts of anthropogenic climate change, but they are computationally too expensive to run routinely. Recent machine learning approaches have shown promising results in downscaling ESM simulations, outperforming state-of-the-art statistical approaches. However, existing methods require computationally costly retraining for each ESM and extrapolate poorly to climates unseen during training. We address these shortcomings by learning a consistency model (CM) that efficiently and accurately downscales arbitrary ESM simulations in a zero-shot manner, without retraining. Our foundation model approach yields probabilistic downscaled fields at a resolution limited only by the observational reference data. We show that the CM outperforms state-of-the-art diffusion models at a fraction of the computational cost while maintaining high controllability on the downscaling task. Further, our method generalizes to climate states unseen during training without explicitly formulated physical constraints.
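
A minimal sketch of how a trained consistency model might produce a downscaled field in a single step; the conditioning via channel concatenation of a bicubically upsampled ESM field, the noise scale, and the network interface are assumptions for illustration, not the paper's exact architecture:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def downscale_one_step(cm, coarse_field, scale, sigma_max=80.0):
    """One-step probabilistic downscaling with a trained consistency model `cm`.

    coarse_field: (B, C, h, w) ESM field on the coarse grid.
    The conditioning scheme (concatenating the upsampled field as extra
    channels) is an illustrative assumption.
    """
    # upsample the coarse ESM field to the target high-resolution grid
    cond = F.interpolate(coarse_field, scale_factor=scale, mode="bicubic")
    # start from pure Gaussian noise at the largest noise level
    x = sigma_max * torch.randn_like(cond)
    sigma = torch.full((cond.shape[0],), sigma_max, device=cond.device)
    # a consistency model maps any noise level directly back to the data manifold
    return cm(torch.cat([x, cond], dim=1), sigma)
```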

The restricted isometry property (RIP) is essential for a linear map to guarantee successful recovery of low-rank matrices. Existing works show that linear maps generated by measurement matrices with independent and identically distributed (i.i.d.) entries satisfy the RIP with high probability. However, for non-i.i.d. measurement matrices, such as rank-one measurements, the RIP is no longer guaranteed. In this paper, we show that the RIP can still be achieved with high probability when the rank-one measurement matrices are constructed from random unit-modulus vectors. Compared to existing works, we are the first to address the challenge of establishing the RIP for linear maps in non-i.i.d. scenarios. As validated in experiments, this linear map is memory-efficient, and it not only satisfies the RIP but also exhibits low-rank matrix recovery performance similar to that of conventional i.i.d. measurement matrices.
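
As a sketch of the measurement model discussed above, assuming Hermitian measurements of the form y_i = a_i^H X a_i with unit-modulus sensing vectors (the exact measurement form used in the paper may differ):

```python
import numpy as np

def unit_modulus_rank_one_measurements(X, m, rng=None):
    """Apply the linear map y_i = a_i^H X a_i with random unit-modulus vectors.

    X:  (n, n) Hermitian, (approximately) low-rank matrix to be measured
    m:  number of rank-one measurements
    Each a_i has entries exp(j*theta) with theta uniform on [0, 2*pi), so the
    measurement matrices a_i a_i^H are non-i.i.d. and memory-efficient: only
    the n-dimensional vectors a_i need to be stored.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    A = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(m, n)))   # unit-modulus rows
    return np.einsum("mi,ij,mj->m", A.conj(), X, A).real          # real for Hermitian X
```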

Many concepts for future generations of wireless communication systems use coherent processing of signals from many distributed antennas. The aim is to improve communication reliability, capacity, and energy efficiency and provide possibilities for new applications through integrated communication and sensing. The large bandwidths available in the higher bands have inspired much work regarding sensing in the mmWave and sub-THz bands; however, the sub-6 GHz cellular bands will still be the main provider of wide cellular coverage due to the more favorable propagation conditions. In this paper, we present a measurement system and results of sub-6 GHz distributed MIMO measurements performed in an industrial environment. From the measurements, we evaluated the diversity for both large-scale and small-scale fading and characterized the link reliability. We also analyzed the possibility of multistatic sensing and positioning of users in the environment, with the initial results showing a mean-square error below 20 cm on the estimated position. Further, the results clearly showed that new channel models are needed that are spatially consistent and deal with the nonstationary channel properties among the antennas.

Likelihood-based deep generative models such as score-based diffusion models and variational autoencoders are state-of-the-art machine learning models for approximating high-dimensional distributions of data such as images, text, or audio. One of many downstream tasks to which they can naturally be applied is out-of-distribution (OOD) detection. However, seminal work by Nalisnick et al., which we reproduce, showed that deep generative models consistently infer higher log-likelihoods for OOD data than for the data they were trained on, marking an open problem. In this work, we analyse the gradient of a data point with respect to the parameters of the deep generative model for OOD detection, based on the simple intuition that OOD data should have larger gradient norms than training data. We formalise measuring the size of the gradient as approximating the Fisher information metric. We show that the Fisher information matrix (FIM) has large absolute diagonal values, motivating the use of chi-square distributed, layer-wise gradient norms as features. We combine these features into a simple, model-agnostic and hyperparameter-free method for OOD detection which estimates the joint density of the layer-wise gradient norms for a given data point. We find that these layer-wise gradient norms are weakly correlated, rendering their combined usage informative, and prove that the layer-wise gradient norms satisfy the principle of (data representation) invariance. Our empirical results indicate that this method outperforms the Typicality test for most deep generative models and image dataset pairings.
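
A simplified sketch of the gradient-norm features and the resulting OOD score; it assumes `model(x)` returns the log-likelihood of a batch and, for brevity, fits an independent Gaussian to the log-norms rather than the full joint density estimated in the paper:

```python
import torch

def layerwise_gradient_norms(model, x):
    """Squared L2 gradient norm of the log-likelihood, one value per parameter tensor."""
    model.zero_grad()
    log_prob = model(x).sum()            # assumed to be the batch log-likelihood
    log_prob.backward()
    return torch.stack([p.grad.pow(2).sum()
                        for p in model.parameters() if p.grad is not None])

def ood_score(train_features, test_features):
    """Score test points by negative log-density under a Gaussian fitted to the
    log gradient-norm features of training data (higher score = more OOD).
    The diagonal (independent) fit is a simplifying assumption."""
    logs = train_features.log()
    mu, sigma = logs.mean(0), logs.std(0) + 1e-6
    z = (test_features.log() - mu) / sigma
    return 0.5 * (z ** 2).sum(-1)
```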

As cyber attacks continue to increase in frequency and sophistication, detecting malware has become a critical task for maintaining the security of computer systems. Traditional signature-based methods of malware detection have limitations in detecting complex and evolving threats. In recent years, machine learning (ML) has emerged as a promising solution for detecting malware effectively. ML algorithms are capable of analyzing large datasets and identifying patterns that are difficult for humans to identify. This paper presents a comprehensive review of the state-of-the-art ML techniques used in malware detection, including supervised and unsupervised learning, deep learning, and reinforcement learning. We also examine the challenges and limitations of ML-based malware detection, such as the potential for adversarial attacks and the need for large amounts of labeled data. Furthermore, we discuss future directions in ML-based malware detection, including the integration of multiple ML algorithms and the use of explainable AI techniques to enhance the interpretability of ML-based detection systems. Our research highlights the potential of ML-based techniques to improve the speed and accuracy of malware detection and to contribute to enhancing cybersecurity.

Probabilistic model checking is a widely used formal verification technique for automatically verifying qualitative and quantitative properties of probabilistic models. However, capturing such systems, writing the corresponding properties, and verifying them require domain knowledge, which makes the technique inaccessible to researchers and engineers who lack it. Previous studies have extended UML activity diagrams (ADs), developed transformations, and implemented accompanying tools for automation. That research, however, is neither comprehensive nor fully open, which makes it hard to evaluate, extend, adapt, and access. In this paper, we propose a comprehensive verification framework for ADs, including a new profile for probability, time, and quality annotations, a semantic interpretation of ADs in terms of three Markov models, and a set of transformation rules from activity diagrams to the PRISM language, supported by PRISM and Storm. Most importantly, we developed algorithms for the transformation and implemented them in a tool, called QASCAD, using model-based techniques, for fully automated verification. We evaluated one case study in which multiple robots are used for deliveries in a hospital, and further evaluated six other examples from the literature. Taken together, this work makes noteworthy contributions to the verification of ADs by improving evaluation, extensibility, adaptability, and accessibility.
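
For illustration, a PRISM-language model produced by such a transformation could be checked with Storm's Python bindings (stormpy) roughly as follows; the file name and the PCTL property are hypothetical placeholders rather than actual QASCAD output:

```python
import stormpy

# PRISM-language model produced by the AD-to-PRISM transformation (path is hypothetical)
program = stormpy.parse_prism_program("hospital_delivery.prism")

# Example quantitative property: probability that a delivery eventually succeeds
# (the "delivered" label is a hypothetical placeholder)
properties = stormpy.parse_properties('P=? [ F "delivered" ]', program)

model = stormpy.build_model(program, properties)
result = stormpy.model_checking(model, properties[0])
print("P(eventually delivered) =", result.at(model.initial_states[0]))
```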

Traditional regulations of chemical exposure tend to focus on single exposures, overlooking the potential amplified toxicity of multiple concurrent exposures. We are interested in the average outcome if exposures were limited to fall under a multivariate threshold. Because threshold levels are often unknown a priori, we provide an algorithm that finds the exposure threshold levels at which the expected outcome is maximized or minimized. Because identifying thresholds and estimating policy effects on the same data would lead to overfitting bias, we also provide a data-adaptive estimation framework that allows for both threshold discovery and policy estimation. Simulation studies show asymptotic convergence to the optimal exposure region and to the true effect of an intervention. We demonstrate how our method identifies true interactions in a public synthetic mixture data set. Finally, we apply our method to NHANES data to discover the metal exposures that have the most harmful effects on telomere length. We provide an implementation in the CVtreeMLE R package.
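
As a toy illustration of the threshold-discovery step only (not the cross-validated targeted learning implemented in CVtreeMLE, and ignoring confounding and sample splitting), a naive grid search over candidate multivariate thresholds might look like this:

```python
import numpy as np

def find_threshold(exposures, outcome, quantiles=np.linspace(0.2, 0.9, 8), minimize=True):
    """Naive grid search for a multivariate exposure threshold.

    exposures: (n, d) matrix of exposure levels
    outcome:   (n,) observed outcome
    Returns the threshold vector whose "all exposures below threshold" region
    has the smallest (or largest) mean outcome. Combinatorial in d; for
    illustration only.
    """
    n, d = exposures.shape
    best, best_thr = None, None
    # candidate thresholds: all combinations of marginal quantiles
    grids = np.meshgrid(*[np.quantile(exposures[:, j], quantiles) for j in range(d)])
    for thr in zip(*[g.ravel() for g in grids]):
        mask = np.all(exposures <= np.asarray(thr), axis=1)
        if mask.sum() < 10:                       # require enough observations
            continue
        val = outcome[mask].mean()
        if best is None or (val < best if minimize else val > best):
            best, best_thr = val, np.asarray(thr)
    return best_thr, best
```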

The problem of substructure characteristic modes is reformulated using a scattering-matrix-based formulation, generalizing subregion characteristic mode decomposition to arbitrary computational tools. It is shown that the scattering formulation is identical to the classical formulation based on the background Green's function for lossless systems. The scattering formulation, however, opens a variety of new subregion scenarios unavailable within previous formulations, including cases with lumped or wave ports or subregions in circuits. Thanks to its scattering nature, the formulation is solver-agnostic and can be paired with an arbitrary full-wave method.
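
For orientation, in one common convention (generic notation, which may differ from the paper's), the classical impedance-based eigenvalue problem $\mathbf{X}\mathbf{I}_n = \lambda_n \mathbf{R}\mathbf{I}_n$ connects to the scattering picture through the transition operator $\mathbf{T} = (\mathbf{S} - \mathbf{1})/2$, whose eigenvalues for a lossless structure satisfy $t_n = -1/(1 + \mathrm{j}\lambda_n)$, so that both formulations yield the same modal set.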

This work considers the question of how convenient access to copious data impacts our ability to learn causal effects and relations. In what ways is learning causality in the era of big data different from -- or the same as -- the traditional one? To answer this question, this survey provides a comprehensive and structured review of both traditional and frontier methods in learning causality and relations along with the connections between causality and machine learning. This work points out on a case-by-case basis how big data facilitates, complicates, or motivates each approach.
