
Hawkes processes are a class of self-exciting point processes used to model complex phenomena. While most applications of Hawkes processes assume that events occur in continuous time, the less-studied discrete-time version of the process is more appropriate in some situations. In this work, we develop methodology for the efficient implementation of discrete Hawkes processes. We achieve this by developing efficient algorithms to evaluate the log-likelihood function and its gradient, whose computational complexity is linear in the number of events. We extend these methods to a particular form of multivariate marked discrete Hawkes process, which we use to model the occurrences of violent events within a forensic psychiatric hospital. A prominent feature of our problem, captured by a mark in our process, is the presence of an alarm system which can be heard throughout the hospital. An alarm is sounded when an event is particularly violent in nature and warrants a call for assistance from other members of staff. We conduct a detailed analysis showing that this variant of the Hawkes process outperforms alternative models in terms of predictive power. Finally, we interpret our findings and describe their implications.
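
The abstract does not fix a kernel or observation model, but the linear-time idea is easy to illustrate. Below is a minimal sketch, assuming Poisson counts per bin and a geometric excitation kernel, in which the discounted history is carried forward by a single recursion; the function name and parameterisation are illustrative, not the authors'.

```python
import numpy as np
from scipy.special import gammaln

def discrete_hawkes_loglik(counts, mu, alpha, beta):
    """O(n) log-likelihood for a discrete-time Hawkes process with Poisson
    bin counts and a geometric kernel: lambda_t = mu + alpha * A_t, where
    A_t = sum_{s<t} beta**(t-s) * N_s satisfies A_t = beta * (A_{t-1} + N_{t-1})."""
    A, loglik = 0.0, 0.0
    for n_t in counts:
        lam = mu + alpha * A          # conditional intensity of this bin
        loglik += n_t * np.log(lam) - lam - gammaln(n_t + 1.0)
        A = beta * (A + n_t)          # discounted history, carried forward in O(1)
    return loglik
```

A naive evaluation of the same likelihood sums over all past bins for every bin, which is quadratic; the recursion is what makes the linear-in-events complexity possible.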

Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and is employed in a large number of new media and interactive art works.

The number of modes in a probability density function is representative of the model's complexity and can also be viewed as the number of existing subpopulations. Despite its relevance, little research has been devoted to its estimation. Focusing on the univariate setting, we propose a novel approach targeting prediction accuracy, inspired by some overlooked aspects of the problem. We argue for the need for structure in the solutions, for recognising the subjective and uncertain nature of modes, and for the convenience of a holistic view blending global and local density properties. Our method builds upon a combination of flexible kernel estimators and parsimonious compositional splines. Feature exploration, model selection and mode testing are implemented in the Bayesian inference paradigm, providing soft solutions and allowing expert judgement to be incorporated in the process. The usefulness of our proposal is illustrated through a case study in sports analytics, showcasing multiple companion visualisation tools. A thorough simulation study demonstrates that traditional modality-driven approaches paradoxically struggle to provide accurate results. In this context, our method emerges as a top-tier alternative offering innovative solutions for analysts.
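
As a point of reference for the traditional approach the paper improves on, the sketch below counts local maxima of a standard kernel density estimate on a grid. It is not the authors' spline-based Bayesian method; the bandwidth choice drives how many bumps survive, which is precisely the sensitivity the paper's holistic view addresses.

```python
import numpy as np
from scipy.stats import gaussian_kde

def count_modes(sample, bandwidth=None, grid_size=512):
    """Count local maxima of a KDE on a fine grid.

    A mode is a grid point whose density exceeds both neighbours;
    the bandwidth controls how aggressively small bumps are smoothed away.
    """
    kde = gaussian_kde(sample, bw_method=bandwidth)
    grid = np.linspace(sample.min(), sample.max(), grid_size)
    dens = kde(grid)
    interior = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
    return int(interior.sum()), grid[1:-1][interior]

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 400), rng.normal(3, 0.5, 200)])
n_modes, mode_locations = count_modes(x)   # expect 2 modes for this mixture
```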

Relative localization is crucial for multi-robot systems performing cooperative tasks, especially in GPS-denied environments. Current techniques for multi-robot relative localization rely on expensive or short-range sensors such as cameras and LIDARs. As a result, these algorithms face challenges such as high computational complexity and dependence on well-structured environments. To overcome these limitations, we propose a new distributed approach that performs relative localization using a Gaussian Processes map of the Received Signal Strength Indicator (RSSI) values from a single wireless Access Point (AP) to which the robots are connected. Our approach, Gaussian Processes-based Relative Localization (GPRL), combines two pillars. First, the robots locate the AP w.r.t. their local reference frames using a novel hierarchical inferencing scheme that significantly reduces computational complexity. Second, the robots obtain the relative positions of neighboring robots with an AP-oriented vector transformation. The approach readily applies to resource-constrained devices and relies only on the ubiquitously available RSSI measurement. We extensively validate the performance of the two pillars of the proposed GPRL in Robotarium simulations. We also demonstrate the applicability of GPRL through a multi-robot rendezvous task with a team of three real-world robots. The results demonstrate that GPRL outperforms state-of-the-art approaches in terms of accuracy, computation, and real-time performance.
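
A minimal sketch of the first pillar, assuming a 2-D workspace and using scikit-learn's GP regressor with a plain grid search as a stand-in for the paper's hierarchical inferencing; the helper names and the aligned-orientation simplification are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def locate_ap(positions, rssi, bounds=(-10, 10), grid_size=50):
    """Fit a GP map of RSSI over 2-D positions and return the grid point
    with the highest predicted signal strength as the AP location estimate."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(positions, rssi)                       # positions: (n, 2), rssi: (n,)
    g = np.linspace(*bounds, grid_size)
    grid = np.array([(x, y) for x in g for y in g])
    return grid[np.argmax(gp.predict(grid))]

def relative_position(ap_in_i, ap_in_j):
    """Both robots share the AP as a common landmark, so differencing their
    AP estimates yields a relative position (orientations assumed aligned
    here; GPRL's vector transformation also handles rotation)."""
    return ap_in_i - ap_in_j
```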

Detecting crash-consistency bugs in non-volatile persistent memory (PM) and developing tools to identify them has been a long-standing challenge, owing to the inconsistent behavior of PM in file and storage systems. In this paper, we evaluate an open-source automatic bug-detection tool, AGAMOTTO, on an NVM-level-hashing PM application to identify performance and correctness bugs in persistent (main) memory. Our validation discovered 65 new NVM-level-hashing bugs in the PMDK library, exceeding the 40 bugs that the WITCHER framework was able to identify. Finally, we propose a Deep Q-learning search heuristic to replace the PM-aware search algorithm in the state-selection process and make the search strategy more efficient.
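
The abstract names a Deep Q-learning heuristic without details; the sketch below shows the underlying idea with plain tabular Q-learning, where the reward for exploring a program state could be, for instance, the number of new PM bugs it exposes. The `step` interface is hypothetical, and a deep variant would replace the table with a neural network.

```python
import random
from collections import defaultdict

def q_learning_state_selection(episodes, actions, step,
                               alpha=0.1, gamma=0.9, eps=0.2):
    """Tabular Q-learning sketch of reward-driven state selection.

    `step(state, action)` must return (next_state, reward, done); the
    learned Q-values steer exploration toward states that keep paying off.
    """
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = "init", False
        while not done:
            if random.random() < eps:                        # explore
                action = random.choice(actions)
            else:                                            # exploit best known action
                action = max(actions, key=lambda a: Q[(state, a)])
            nxt, reward, done = step(state, action)
            best_next = max(Q[(nxt, a)] for a in actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = nxt
    return Q
```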

The Metaverse is rapidly evolving, bringing its widespread realization ever closer. However, the broad adoption of this new technology poses significant research challenges in terms of authenticity, integrity, interoperability, and efficiency. These challenges originate from the core technologies underlying the Metaverse and are exacerbated by its complex nature. As a solution to these challenges, this paper presents a novel framework based on Non-Fungible Tokens (NFTs). The framework employs the Proof-of-Stake (PoS) consensus algorithm, a blockchain-based technology, for data transaction, validation, and resource management. PoS consumes energy efficiently and provides a streamlined validation approach in place of resource-intensive mining, which makes it an ideal candidate for Metaverse applications. By combining NFTs for user authentication with PoS for data integrity, enhanced transaction throughput, and improved scalability, the proposed blockchain mechanism demonstrates noteworthy advantages. Through security analysis and experimental and simulation results, it is established that the NFT-based approach coupled with the PoS algorithm is secure and efficient for Metaverse applications.
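
The abstract does not spell out the consensus mechanics, but the core PoS selection rule is simple to sketch: validators are drawn with probability proportional to their stake, so no mining work is required. The names and stake values below are purely illustrative.

```python
import random

def select_validator(stakes, seed=None):
    """Pick the next block validator with probability proportional to stake,
    the canonical selection rule behind Proof-of-Stake consensus."""
    rng = random.Random(seed)
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 50, "bob": 30, "carol": 20}   # stake units per validator
print(select_validator(stakes, seed=42))
```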

This article introduces the class of periodic trawl processes, which are continuous-time, infinitely divisible, stationary stochastic processes that allow for periodicity and flexible forms of serial correlation, including both short- and long-memory settings. We derive some of the key probabilistic properties of periodic trawl processes and present relevant examples. Moreover, we show how such processes can be simulated and establish the asymptotic theory for their sample mean and sample autocovariances. Consequently, we prove the asymptotic normality of a (generalised) method-of-moments estimator for the model parameters. We illustrate the new model and estimation methodology in an application to electricity prices.
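
For orientation, one natural construction can be sketched as follows (the paper's exact definition may differ). Given a Lévy basis $L$, a trawl function $g \ge 0$, and a periodic weight $p$ with period $\tau$,

$$X_t \;=\; \int_{\mathbb{R}\times\mathbb{R}} p(t-s)\,\mathbf{1}_{\{s \le t\}}\,\mathbf{1}_{[0,\,g(t-s))}(x)\; L(\mathrm{d}x,\mathrm{d}s), \qquad p(u+\tau) = p(u) \ \text{for all } u,$$

and taking $p \equiv 1$ recovers the classical stationary trawl process $X_t = L(A_t)$ with trawl set $A_t = \{(x,s) : s \le t,\ 0 \le x < g(t-s)\}$.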

Population size estimation based on capture-recapture experiments is an important problem in various fields, including epidemiology, criminology, and demography. In many real-life scenarios, there exists inherent heterogeneity among the individuals and dependency between capture and recapture attempts. A novel trivariate Bernoulli model is considered to incorporate these features, and Bayesian estimation of the model parameters is carried out using data augmentation. Simulation results show robustness under model misspecification and the superior performance of the proposed method over existing competitors. The method is applied to real case studies in epidemiological surveillance. The results provide interesting insight into the heterogeneity and dependence involved in the capture-recapture mechanism, and the proposed methodology can assist in effective decision-making and policy formulation.
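
The model details are beyond the abstract, but the two data features it targets are easy to make concrete. The sketch below simulates three dependent capture occasions with individual heterogeneity (a "trap-happy" behavioural effect); it illustrates the data-generating features only, not the paper's trivariate Bernoulli model or its Bayesian estimator.

```python
import numpy as np

def simulate_capture_recapture(N, seed=0):
    """Simulate three dependent capture attempts over a population of size N.

    Heterogeneity: each individual gets its own capture propensity.
    Dependence:    being caught once raises the chance of later captures.
    """
    rng = np.random.default_rng(seed)
    p = rng.beta(2, 5, size=N)                 # individual capture propensities
    boost = 1.5                                # behavioural dependence factor
    history = np.zeros((N, 3), dtype=int)
    for occasion in range(3):
        caught_before = history[:, :occasion].any(axis=1)
        prob = np.clip(p * np.where(caught_before, boost, 1.0), 0, 1)
        history[:, occasion] = rng.random(N) < prob
    observed = history.any(axis=1)             # individuals seen at least once
    return history[observed], observed.sum()

capture_histories, n_observed = simulate_capture_recapture(N=500)
```

The estimation task is then to recover N from the observed histories alone, since never-captured individuals leave no record.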

The central problem we address in this work is estimation of the parameter support set S, the set of indices corresponding to nonzero parameters, in the context of a sparse parametric likelihood model for count-valued multivariate time series. We develop a computationally intensive algorithm that performs the estimation by aggregating support sets obtained by applying the LASSO to data subsamples. Our approach is to identify several well-fitting candidate models and estimate S by the most frequently used parameters, thus aggregating candidate models rather than selecting a single candidate deemed optimal in some sense. While our method is broadly applicable to any selection problem, we focus on the generalized vector autoregressive model class, and in particular the Poisson case, because of (i) the difficulty of the support estimation problem arising from complex dependence in the data, (ii) recent work applying the LASSO in this context, and (iii) interesting applications in network recovery from discrete multivariate time series. We establish benchmark methods based on the LASSO and present empirical results demonstrating the superior performance of our method. Additionally, we present an application estimating ecological interaction networks from paleoclimatology data.
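
A minimal sketch of the aggregation idea, using a plain Gaussian LASSO from scikit-learn rather than the paper's Poisson generalized vector autoregression: fit on many random subsamples, count how often each coordinate is selected, and keep those above a frequency threshold. The subsample fraction, penalty, and threshold values are illustrative.

```python
import numpy as np
from sklearn.linear_model import Lasso

def aggregate_support(X, y, n_subsamples=100, frac=0.5,
                      alpha=0.1, tau=0.6, seed=0):
    """Estimate the support set by aggregating LASSO fits on subsamples:
    keep indices selected in at least a fraction `tau` of the fits."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_subsamples):
        idx = rng.choice(n, size=int(frac * n), replace=False)
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])
        counts += model.coef_ != 0             # tally selected coordinates
    return np.flatnonzero(counts / n_subsamples >= tau)
```

Aggregating selection frequencies in this way tends to be more stable than trusting the support of a single penalized fit.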

A point process for event arrivals in high-frequency trading is presented. The intensity is the product of a Hawkes process and high-dimensional functions of covariates derived from the order book. Conditions for stationarity of the process are stated. An algorithm is presented to estimate the model even in the presence of billions of data points, possibly mapping covariates into a high-dimensional space. Such sample sizes are common in high-frequency data applications using multiple liquid instruments. Convergence of the algorithm is shown, consistency results under weak conditions are established, and a test statistic to assess out-of-sample performance of different model specifications is suggested. The methodology is applied to the study of four stocks that trade on the New York Stock Exchange (NYSE). The out-of-sample testing procedure suggests that capturing the nonlinearity of the order-book information adds value on top of the self-exciting nature of high-frequency trading events.
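
A sketch of the multiplicative intensity, assuming an exponential excitation kernel and an unspecified positive covariate map `f`; the paper's high-dimensional order-book features and exact kernel are abstracted away.

```python
import numpy as np

def intensity(t, event_times, covariates, f, mu=0.05, alpha=0.8, beta=1.5):
    """Intensity as a product of a covariate term and a Hawkes term:

        lambda(t) = f(x_t) * (mu + alpha * sum_i exp(-beta * (t - t_i)))

    event_times : 1-D numpy array of past arrival times
    f           : positive function of the order-book covariates x_t
    """
    past = event_times[event_times < t]
    hawkes = mu + alpha * np.exp(-beta * (t - past)).sum()
    return f(covariates) * hawkes
```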

Multi-tenancy in public clouds may lead to co-location interference on shared resources, which can degrade the performance of cloud applications. Cloud providers want to know when such events happen and how serious the degradation is, so they can perform interference-aware migrations and alleviate the problem. However, virtual machines (VMs) in Infrastructure-as-a-Service public clouds are black boxes to providers, and application-level performance information cannot be acquired. This makes performance monitoring intensely challenging, as cloud providers can only rely on low-level metrics such as CPU usage and hardware counters. We propose Alioth, a novel machine learning framework for monitoring the performance degradation of cloud applications. To feed the data-hungry models, we first build interference generators and conduct comprehensive co-location experiments on a testbed to construct the Alioth dataset, which reflects the complexity and dynamicity of real-world scenarios. We then construct Alioth by (1) augmenting features by recovering the low-level metrics that would be observed under no interference using denoising auto-encoders, (2) devising a transfer-learning model based on a domain-adaptation neural network so that models generalize to test cases unseen in offline training, and (3) developing a SHAP explainer to automate feature selection and enhance model interpretability. Experiments show that Alioth achieves a mean absolute error of 5.29% offline and 10.8% when testing on applications unseen in the training stage, outperforming baseline methods. Alioth is also robust in signaling quality-of-service violations under dynamicity. Finally, we demonstrate a possible application of Alioth's interpretability, providing insights that benefit the decision-making of cloud operators. The dataset and code of Alioth have been released on GitHub.
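
A minimal PyTorch sketch of step (1), the denoising auto-encoder: trained on pairs of metrics observed under interference and the corresponding interference-free metrics, the bottleneck forces the network to strip the interference signal. Layer sizes and the training snippet are placeholders, not Alioth's actual architecture.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    """Reconstruct interference-free low-level metrics from observed ones."""
    def __init__(self, n_metrics, hidden=32, code=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_metrics, hidden), nn.ReLU(),
                                     nn.Linear(hidden, code))
        self.decoder = nn.Sequential(nn.Linear(code, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_metrics))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingAE(n_metrics=20)
loss_fn = nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on placeholder tensors: map noisy metrics to clean ones.
noisy, clean = torch.randn(64, 20), torch.randn(64, 20)
loss = loss_fn(model(noisy), clean)
opt.zero_grad()
loss.backward()
opt.step()
```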

Graphs are important data representations for describing objects and their relationships, and they appear in a wide diversity of real-world scenarios. A critical problem in this area is graph generation: learning the distribution of given graphs and generating novel graphs from it. Generative models for graphs have a rich history and a wide range of applications; traditionally, however, they are hand-crafted and capable of modeling only a few statistical properties of graphs. Recent advances in deep generative models for graph generation are an important step towards improving the fidelity of generated graphs and pave the way for new kinds of applications. This article provides an extensive overview of the literature in the field of deep generative models for graph generation. First, the formal definition of deep generative models for graph generation and the preliminary knowledge are provided. Second, taxonomies of deep generative models for unconditional and conditional graph generation are proposed, and the existing works in each category are compared and analyzed. After that, an overview of the evaluation metrics in this specific domain is provided. Finally, the applications that deep graph generation enables are summarized and five promising future research directions are highlighted.
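
To make the contrast concrete, the simplest hand-crafted generative model fits a single statistic, the average edge density, and samples Erdos-Renyi graphs from it; everything else about the training graphs is lost, which is exactly the limitation deep generative models address. A minimal sketch:

```python
import numpy as np

def fit_and_sample_er(adjacency_matrices, n_nodes, seed=0):
    """Hand-crafted baseline: an Erdos-Renyi model matching only the
    average edge density of the training graphs."""
    rng = np.random.default_rng(seed)
    densities = [A.sum() / (A.shape[0] * (A.shape[0] - 1))
                 for A in adjacency_matrices]
    p = float(np.mean(densities))              # the single learned statistic
    A = (rng.random((n_nodes, n_nodes)) < p).astype(int)
    A = np.triu(A, 1)                          # keep upper triangle, no self-loops
    return A + A.T                             # symmetrise: undirected graph
```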
