
We study the continuous time limit of a self-exciting negative binomial process and discuss the critical properties of its intensity distribution. In this limit, the process transforms into a marked Hawkes process. The probability mass function of the marks has a parameter $\omega$, and the process reduces to a "pure" Hawkes process in the limit $\omega\to 0$. We investigate the Lagrange--Charpit equations for the master equations of the marked Hawkes process in the Laplace representation close to its critical point and extend the previous findings on the power-law scaling of the probability density function (PDF) of intensities in the intermediate asymptotic regime to the case where the memory kernel is the superposition of an arbitrary finite number of exponentials. We develop an efficient sampling method for the marked Hawkes process based on the time-rescaling theorem and verify the power-law exponents.
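The abstract's sampling method is based on the time-rescaling theorem; as a lighter-weight illustration of the process itself, the sketch below simulates a marked Hawkes process with a single exponential memory kernel via Ogata's thinning. The mark distribution, the intensity parametrization, and all parameter values are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def simulate_marked_hawkes(mu=0.5, alpha=0.5, beta=2.0, omega=0.5, T=500.0, seed=0):
    """Ogata thinning for a marked Hawkes process with a single exponential kernel.

    lambda(t) = mu + sum_{t_i < t} m_i * alpha * beta * exp(-beta * (t - t_i)),
    with marks m_i ~ Geometric(1 / (1 + omega)), so E[m] = 1 + omega and the
    omega -> 0 limit gives unit marks (an unmarked Hawkes process).
    Stability requires the branching ratio alpha * (1 + omega) < 1.
    """
    rng = np.random.default_rng(seed)
    t, excess = 0.0, 0.0                  # excess = lambda(t) - mu, the excited part of the intensity
    events, marks = [], []
    while True:
        lam_bar = mu + excess                        # upper bound: intensity only decays between events
        w = rng.exponential(1.0 / lam_bar)           # candidate waiting time
        t += w
        if t >= T:
            break
        excess *= np.exp(-beta * w)                  # decay the excitation to the candidate time
        if rng.uniform() <= (mu + excess) / lam_bar: # accept with probability lambda(t) / lam_bar
            m = rng.geometric(1.0 / (1.0 + omega))   # integer mark with mean 1 + omega
            events.append(t)
            marks.append(m)
            excess += m * alpha * beta               # intensity jump carried by the mark
    return np.array(events), np.array(marks)

# Sanity check: the empirical rate should approach mu / (1 - alpha * (1 + omega)).
ev, mk = simulate_marked_hawkes()
print(len(ev) / 500.0, 0.5 / (1.0 - 0.5 * 1.5))
```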

Related content

Processing is the name of an open-source programming language and its accompanying integrated development environment (IDE). Processing is used in the electronic arts and visual design communities to teach the fundamentals of programming, and it is employed in a large number of new media and interactive art works.

The heterogeneous, geographically distributed infrastructure of fog computing poses challenges in data replication, data distribution, and data mobility for fog applications. Fog computing still lacks the necessary abstractions to manage application data, so fog application developers must re-implement data management for every new piece of software. Existing proposals are limited to certain application domains such as the IoT, are inflexible with regard to network topology, or do not give applications the means to control the movement of their data. In this paper, we present FReD, a data replication middleware for the fog. FReD serves as a building block for configurable fog data distribution and enables low-latency, high-bandwidth, and privacy-sensitive applications. FReD provides a common data access interface across heterogeneous infrastructure and network topologies, offers transparent and controllable data distribution, and can be integrated with applications from different domains. To evaluate our approach, we present a prototype implementation of FReD and show the benefits of developing with FReD through three case studies of fog computing applications.
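To make the kind of abstraction described above concrete, the following is a hypothetical, heavily simplified data-management interface in which applications group keys and pin each group to the fog nodes that should hold replicas. The class and method names are illustrative assumptions for exposition only and are not FReD's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class KeyGroup:
    name: str
    replica_nodes: set = field(default_factory=set)   # fog nodes that hold a copy of this group
    data: dict = field(default_factory=dict)

class FogDataStore:
    """Hypothetical replication-aware key-value interface (not FReD's real API)."""

    def __init__(self):
        self.groups = {}

    def create_group(self, name, replica_nodes):
        self.groups[name] = KeyGroup(name, set(replica_nodes))

    def put(self, group, key, value):
        self.groups[group].data[key] = value           # would be replicated to all replica_nodes

    def get(self, group, key):
        return self.groups[group].data.get(key)

    def add_replica(self, group, node):
        self.groups[group].replica_nodes.add(node)     # application-controlled data movement

# Usage: keep sensor readings only on an edge site and a nearby gateway.
store = FogDataStore()
store.create_group("sensor-readings", {"edge-site-1", "gateway-eu-west"})
store.put("sensor-readings", "machine-42/temp", 71.3)
print(store.get("sensor-readings", "machine-42/temp"))
```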

This paper presents a decentralized algorithm for solving distributed convex optimization problems in dynamic networks with time-varying objectives. The unique feature of the algorithm lies in its ability to accommodate a wide range of communication systems, including previously unsupported ones, by abstractly modeling the information exchange in the network. Specifically, it supports a novel communication protocol based on "over-the-air" function computation (OTA-C) technology, which is designed for an efficient and truly decentralized implementation of the consensus step of the algorithm. Unlike existing OTA-C protocols, the proposed protocol requires knowledge of neither the network graph structure nor the channel state information, making it particularly suitable for decentralized implementation over ultra-dense wireless networks with time-varying topologies and fading channels. Furthermore, the proposed algorithm synergizes with the "superiorization" methodology, allowing the development of new distributed algorithms with enhanced performance for the intended applications. The theoretical analysis establishes sufficient conditions for almost sure convergence of the algorithm to a common time-invariant solution for all agents, assuming such a solution exists. We apply the algorithm to a real-world distributed random field estimation problem, showcasing its efficacy in terms of convergence speed, scalability, and spectral efficiency. Finally, we present a superiorized version of our algorithm that achieves faster convergence with significantly reduced energy consumption compared to the unsuperiorized algorithm.
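The following is a minimal sketch of the kind of consensus-plus-gradient iteration this family of algorithms builds on: each agent holds a local quadratic cost and a local estimate, one round consists of an imperfect averaging step (a noisy aggregate standing in for over-the-air computation) followed by a local gradient step. The step sizes, noise level, and quadratic costs are illustrative choices, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 20, 3
A = rng.normal(size=(n_agents, dim, dim))
Q = np.einsum("nij,nkj->nik", A, A) + np.eye(dim)          # local positive-definite Hessians
b = rng.normal(size=(n_agents, dim))                       # local linear terms
x = rng.normal(size=(n_agents, dim))                       # local estimates

def local_grad(i, xi):
    return Q[i] @ xi - b[i]                                # gradient of 0.5 x'Q_i x - b_i'x

for k in range(1, 2001):
    # Noisy aggregate of all local estimates, mimicking an over-the-air average.
    ota_avg = x.mean(axis=0) + rng.normal(scale=0.05 / k, size=dim)
    step = 1.0 / (k + 10)                                  # diminishing step size
    for i in range(n_agents):
        x[i] = ota_avg - step * local_grad(i, x[i])        # consensus step, then local gradient step

x_star = np.linalg.solve(Q.sum(axis=0), b.sum(axis=0))     # minimizer of the sum of local costs
print("distance to global optimum:", np.linalg.norm(x.mean(axis=0) - x_star))
```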

Offline Reinforcement Learning (RL) methods leverage previous experiences to learn better policies than the behavior policy used for experience collection. In contrast to behavior cloning, which assumes the data is collected from expert demonstrations, offline RL can work with non-expert data and multimodal behavior policies. However, offline RL algorithms face challenges in handling distribution shifts and in representing policies effectively due to the lack of online interaction during training. Prior work on offline RL uses conditional diffusion models to obtain expressive policies that represent multimodal behavior in the dataset. Nevertheless, these models are not tailored to improving generalization to out-of-distribution states. We introduce a novel method that incorporates state-reconstruction feature learning into the recent class of diffusion policies to address the out-of-distribution generalization problem. The state-reconstruction loss promotes more descriptive representation learning of states, alleviating the distribution shift incurred by out-of-distribution states. We design a 2D Multimodal Contextual Bandit environment to demonstrate and evaluate our proposed model. We assess the performance of our model not only in this new environment but also on several D4RL benchmark tasks, achieving state-of-the-art results.
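As a schematic illustration of the kind of objective described above, the sketch below combines a conditional diffusion (noise-prediction) loss over actions with an auxiliary state-reconstruction loss computed from a shared state encoder. The network sizes, noise schedule, and the weighting `lam` are placeholder choices, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

state_dim, action_dim, hidden, T = 17, 6, 128, 100
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

encoder = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, state_dim))
eps_net = nn.Sequential(nn.Linear(hidden + action_dim + 1, hidden), nn.ReLU(),
                        nn.Linear(hidden, action_dim))
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters(), *eps_net.parameters()], lr=3e-4)

def training_step(states, actions, lam=0.1):
    z = encoder(states)                                    # shared state representation
    t = torch.randint(0, T, (states.shape[0],))
    noise = torch.randn_like(actions)
    a_bar = alphas_bar[t].unsqueeze(-1)
    noisy_actions = a_bar.sqrt() * actions + (1 - a_bar).sqrt() * noise   # forward diffusion of actions
    eps_hat = eps_net(torch.cat([z, noisy_actions, t.unsqueeze(-1) / T], dim=-1))
    diffusion_loss = F.mse_loss(eps_hat, noise)            # standard denoising objective
    recon_loss = F.mse_loss(decoder(z), states)            # auxiliary state-reconstruction objective
    loss = diffusion_loss + lam * recon_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# One illustrative step on random tensors standing in for an offline batch.
print(training_step(torch.randn(256, state_dim), torch.randn(256, action_dim)))
```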

Remaining Useful Life (RUL) prediction is a critical task that aims to estimate the amount of time until a system fails. An end-to-end RUL-based chain is formed by three main components: the application, the communication network, and the RUL logic. In this paper, we provide an end-to-end analysis of such a chain. Specifically, we consider a factory floor where Automated Guided Vehicles (AGVs) transport dangerous liquids whose fall may cause injuries to workers. Regarding the communication infrastructure, the AGVs are equipped with 5G User Equipments (UEs) that collect real-time data on their movements and send them to an application server. The RUL logic consists of a Deep Learning (DL)-based pipeline that analyzes the collected data to assess whether a liquid fall is imminent and, if necessary, sends commands to the AGVs to avoid the danger. In this scenario, we performed end-to-end 5G NR-compliant network simulations to study the Round-Trip Time (RTT) as a function of the overall system bandwidth, subcarrier spacing, and modulation order. Then, via real-world experiments, we collected data to train, test, and compare 7 DL models and 1 baseline threshold-based algorithm in terms of cost and average advance. Finally, we assess whether the RTT provided by four different 5G NR network architectures is compatible with the average advance provided by the best-performing model, a one-Dimensional Convolutional Neural Network (1D-CNN). Numerical results show under which conditions the DL-based approach for RUL estimation matches the RTT performance provided by the different 5G NR network architectures.
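The final compatibility check reduces to a simple timing comparison: a network architecture is usable if the round-trip time still leaves the AGV enough reaction time within the predicted advance. The toy check below illustrates that comparison; all numbers and architecture labels are placeholders, not the paper's measurements.

```python
# Hypothetical figures only: average advance of a 1D-CNN-style predictor and the time
# the AGV needs to react once the command arrives.
avg_advance_s = 0.80
actuation_margin_s = 0.20

rtt_by_architecture_s = {     # round-trip times of four hypothetical 5G NR configurations
    "edge, 100 MHz, 30 kHz SCS": 0.012,
    "edge, 40 MHz, 15 kHz SCS": 0.025,
    "core, 100 MHz, 30 kHz SCS": 0.055,
    "remote cloud, 40 MHz, 15 kHz SCS": 0.210,
}

for arch, rtt in rtt_by_architecture_s.items():
    compatible = rtt + actuation_margin_s <= avg_advance_s
    print(f"{arch}: RTT={rtt * 1000:.0f} ms -> {'compatible' if compatible else 'too slow'}")
```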

We expect the generalization error to improve with more samples from a similar task and to deteriorate with more samples from an out-of-distribution (OOD) task. In this work, we show a counter-intuitive phenomenon: the generalization error of a task can be a non-monotonic function of the number of OOD samples. As the number of OOD samples increases, the generalization error on the target task improves before deteriorating beyond a threshold. In other words, there is value in training on small amounts of OOD data. We use Fisher's Linear Discriminant on synthetic datasets and deep networks on computer vision benchmarks such as MNIST, CIFAR-10, CINIC-10, PACS, and DomainNet to demonstrate and analyze this phenomenon. In the idealized setting where we know which samples are OOD, we show that these non-monotonic trends can be exploited using an appropriately weighted objective of the target and OOD empirical risks. While its practical utility is limited, this does suggest that if we can detect OOD samples, then there may be ways to benefit from them. When we do not know which samples are OOD, we show how a number of go-to strategies such as data augmentation, hyperparameter optimization, and pre-training are not enough to ensure that the target generalization error does not deteriorate with the number of OOD samples in the dataset.
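The idealized setting above can be made concrete with a small numpy experiment: fit Fisher's Linear Discriminant on a weighted pool of target and OOD samples and track the target test error as the number of OOD samples grows. The distributions, sample sizes, and the weight `alpha` are illustrative choices, not a reproduction of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
mu = np.stack([np.zeros(d), np.ones(d)])                  # target class means

def sample(n, shift):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, d)) + mu[y] + shift            # OOD data = target classes shifted by `shift`
    return X, y

X_test, y_test = sample(5000, 0.0)

def weighted_lda_error(n_ood, alpha=0.2, ood_shift=1.5):
    X_t, y_t = sample(50, 0.0)                              # small target training set
    X_o, y_o = sample(n_ood, ood_shift) if n_ood else (np.empty((0, d)), np.empty(0, int))
    X = np.vstack([X_t, X_o]); y = np.concatenate([y_t, y_o])
    w = np.concatenate([np.ones(len(y_t)), alpha * np.ones(len(y_o))])  # weighted empirical risk
    m0 = np.average(X[y == 0], axis=0, weights=w[y == 0])
    m1 = np.average(X[y == 1], axis=0, weights=w[y == 1])
    Xc = X - np.where(y[:, None] == 0, m0, m1)
    cov = (w[:, None] * Xc).T @ Xc / w.sum() + 1e-3 * np.eye(d)
    beta = np.linalg.solve(cov, m1 - m0)                    # Fisher discriminant direction
    thresh = beta @ (m0 + m1) / 2
    return np.mean((X_test @ beta > thresh) != y_test)      # target-task generalization error

for n_ood in [0, 25, 100, 400, 1600]:
    print(n_ood, round(weighted_lda_error(n_ood), 3))
```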

This article examines whether a multivariate distribution differs from a specified distribution, and it also tests the equality of two multivariate distributions. In the course of this study, a graphical toolkit based on well-known half-space depth information criteria is proposed; it is a two-dimensional plot regardless of the dimension of the data, which makes it useful even for comparing high-dimensional distributions. The simple interpretability of the proposed graphical toolkit motivates us to formulate test statistics for the corresponding hypothesis-testing problems. It is established that the proposed tests based on the same information criteria are consistent; moreover, the asymptotic distributions of the test statistics under contiguous/local alternatives are derived, which enables us to compute the asymptotic power of these tests. Furthermore, the computations associated with the proposed tests are light. Besides, these tests perform better than many other tests available in the literature when data are generated from various distributions, such as heavy-tailed distributions, which indicates that the proposed methodology is also robust. Finally, the usefulness of the proposed graphical toolkit and tests is demonstrated on two benchmark real data sets.
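The sketch below illustrates the depth-versus-depth construction underlying such a two-dimensional plot: the half-space (Tukey) depth of each pooled observation is approximated with random projections and mapped to a pair (depth w.r.t. sample F, depth w.r.t. sample G). Under equal distributions the scatter concentrates around the diagonal. The random-projection approximation and the sample sizes are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def halfspace_depth(points, sample, n_dir=200):
    """Random-projection approximation of the half-space (Tukey) depth of `points` w.r.t. `sample`."""
    dirs = rng.normal(size=(n_dir, sample.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    proj_s = sample @ dirs.T                   # projections of the reference sample
    proj_p = points @ dirs.T                   # projections of the query points
    frac_above = (proj_s[None, :, :] >= proj_p[:, None, :]).mean(axis=1)   # empirical P(u'Y >= u'x)
    return frac_above.min(axis=1)              # minimise over the sampled directions

F = rng.normal(size=(300, 5))                  # sample from the first distribution
G = rng.standard_t(df=3, size=(300, 5))        # sample from the second (heavier-tailed) distribution
pooled = np.vstack([F, G])

# Each pooled observation becomes one point of the 2-D plot: (depth w.r.t. F, depth w.r.t. G).
dd_points = np.column_stack([halfspace_depth(pooled, F), halfspace_depth(pooled, G)])
print(dd_points[:5])
```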

Classical physical modelling with associated numerical simulation (model-based) and prognostic methods based on the analysis of large amounts of data (data-driven) are the two most common approaches for mapping complex physical processes. In recent years, the efficient combination of these approaches has become increasingly important. At its core, continuum mechanics consists of conservation equations that, in addition to the always necessary specification of the process conditions, can be supplemented by phenomenological material models. The latter are an idealized image of the specific material behavior and can be determined experimentally, empirically, and on the basis of a wealth of expert knowledge. The more complex the material, the more difficult this calibration becomes. This situation forms the starting point for this work's hybrid data-driven and model-based approach for mapping a complex physical process in continuum mechanics. Specifically, we use data generated from a classical physical model by the MESHFREE software to train a Principal Component Analysis-based neural network (PCA-NN) for the task of identifying the material model parameters. The obtained results highlight the potential of deep-learning-based hybrid models for determining parameters, which are the key to characterizing naturally occurring materials and to their use in industrial applications (e.g. the interaction of vehicles with sand).
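A toy version of the PCA-NN idea can clarify the workflow: generate response curves from a cheap surrogate forward model, compress them with PCA, and train a small neural network to map the principal-component scores back to the material parameters. The forward model, parameter ranges, and network size below are illustrative assumptions; the paper uses MESHFREE simulations rather than this toy model.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)

def forward_model(theta):
    """Stand-in for an expensive simulation: parameters -> discretised response curve."""
    stiffness, damping = theta
    return np.exp(-damping * t) * np.sin(2 * np.pi * stiffness * t)

thetas = rng.uniform([1.0, 0.5], [5.0, 3.0], size=(2000, 2))       # sampled material parameters
curves = np.array([forward_model(th) for th in thetas])            # simulated "measurements"
curves += 0.01 * rng.normal(size=curves.shape)                     # measurement noise

pca = PCA(n_components=10).fit(curves)
scores = pca.transform(curves)                                     # low-dimensional features

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(scores[:1800], thetas[:1800])                              # learn the inverse map

pred = net.predict(scores[1800:])
print("mean absolute parameter error:", np.abs(pred - thetas[1800:]).mean(axis=0))
```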

Electroencephalogram (EEG) signals reflect brain activity across different brain states, characterized by distinct frequency distributions. Using multifractal analysis tools, we investigate the scaling behaviour of different classes of EEG signals and artifacts. We show that brain states associated with sleep and general anaesthesia are not in general characterized by scale invariance. The lack of scale invariance motivates the development of artifact removal algorithms capable of operating independently at each scale. We then examine the properties of the wavelet quantile normalization (WQN) algorithm, a recently introduced adaptive method for real-time correction of transient artifacts in EEG signals. We establish general results regarding the regularization properties of the WQN algorithm, showing how it can eliminate singularities introduced by artifacts, and we compare it with traditional thresholding algorithms. Furthermore, we show that the performance of the algorithm is independent of the wavelet basis. We finally examine its continuity and boundedness properties and illustrate its distinctive non-local action on the wavelet coefficients through pathological examples.
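The core of wavelet quantile normalization is a scale-by-scale quantile remapping of coefficient amplitudes onto those of a clean reference segment, never amplifying them. The sketch below is a minimal rendition of that idea, assuming the PyWavelets package; the wavelet choice, decomposition level, and synthetic signals are illustrative and do not reproduce the paper's analysis.

```python
import numpy as np
import pywt

def wqn(artifacted, reference, wavelet="sym5", level=5):
    """Scale-wise quantile remapping of wavelet-coefficient amplitudes onto a clean reference."""
    cs_art = pywt.wavedec(artifacted, wavelet, level=level)
    cs_ref = pywt.wavedec(reference, wavelet, level=level)
    out = []
    for ca, cr in zip(cs_art, cs_ref):
        order = np.argsort(np.abs(ca))
        ranks = np.empty_like(order); ranks[order] = np.arange(len(ca))
        q = (ranks + 0.5) / len(ca)                                  # empirical quantile of |coefficient|
        ref_amp = np.quantile(np.abs(cr), q)                         # matching amplitude in the reference
        out.append(np.sign(ca) * np.minimum(np.abs(ca), ref_amp))    # attenuate, never amplify
    return pywt.waverec(out, wavelet)

# Synthetic example: an oscillatory "EEG-like" signal with a transient square artifact.
rng = np.random.default_rng(0)
t = np.linspace(0, 4, 4096)
clean = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
artifact = np.where((t > 2.0) & (t < 2.5), 8.0, 0.0)
restored = wqn(clean + artifact, reference=clean[:1024])[: t.size]
print("mean absolute error after correction:", float(np.abs(restored - clean).mean()))
```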

This paper presents a novel Importance Sampling (IS) scheme for estimating the distribution tails of performance measures modeled with a rich set of tools such as linear programs, integer linear programs, piecewise linear/quadratic objectives, feature maps specified with deep neural networks, etc. The conventional approach of explicitly identifying efficient changes of measure suffers from feasibility and scalability concerns beyond highly stylized models, because the change of measure must be tailored intricately to the objective and the underlying probability distribution. This bottleneck is overcome in the proposed scheme by an elementary transformation that implicitly induces an effective IS distribution in a variety of models by replicating the concentration properties observed in less rare samples. This novel approach is guided by a large deviations principle that brings out the self-similarity of optimal IS distributions. The proposed sampler is the first to attain asymptotically optimal variance reduction across a spectrum of multivariate distributions despite being oblivious to the specifics of the underlying model. Its applicability is illustrated with contextual shortest path and portfolio credit risk models informed by neural networks.
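For readers less familiar with the setting, the snippet below shows a textbook importance-sampling baseline, exponential tilting for a Gaussian sample-mean tail, which is exactly the kind of hand-derived, model-specific change of measure the proposed scheme is designed to avoid. It is not the paper's transformation; all parameters are illustrative.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)
n, a, n_samples = 30, 1.0, 100_000

# Naive Monte Carlo: the event {sample mean of n standard normals > a} is very rare.
x = rng.normal(size=(n_samples, n))
naive = (x.mean(axis=1) > a).mean()

# Exponential tilting: sample each coordinate from N(theta, 1) with theta = a (the
# classical tilt for sample-mean events) and reweight by the likelihood ratio.
theta = a
y = rng.normal(loc=theta, size=(n_samples, n))
lr = np.exp(-theta * y.sum(axis=1) + n * theta ** 2 / 2)   # density ratio f(y) / f_tilted(y)
tilted = (lr * (y.mean(axis=1) > a)).mean()

exact = 0.5 * erfc(a * sqrt(n) / sqrt(2.0))                # P(N(0, 1/n) > a) in closed form
print(f"naive MC: {naive:.2e}, tilted IS: {tilted:.2e}, exact: {exact:.2e}")
```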

The demand for artificial intelligence has grown significantly over the last decade, and this growth has been fueled by advances in machine learning techniques and the ability to leverage hardware acceleration. However, in order to increase the quality of predictions and render machine learning solutions feasible for more complex applications, a substantial amount of training data is required. Although small machine learning models can be trained with modest amounts of data, the input for training larger models such as neural networks grows exponentially with the number of parameters. Since the demand for processing training data has outpaced the increase in computational power of computing machinery, there is a need to distribute the machine learning workload across multiple machines and to turn the centralized system into a distributed one. These distributed systems present new challenges, first and foremost the efficient parallelization of the training process and the creation of a coherent model. This article provides an extensive overview of the current state of the art in the field by outlining the challenges and opportunities of distributed machine learning over conventional (centralized) machine learning, discussing the techniques used for distributed machine learning, and providing an overview of the available systems.
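As a minimal illustration of the most common parallelization pattern such surveys discuss, the sketch below implements synchronous data parallelism: every worker computes a gradient on its own data shard and a coordinator averages the gradients before updating the shared model. The linear-regression task and the fixed worker count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_per_worker, dim = 4, 2500, 20
w_true = rng.normal(size=dim)

shards = []
for _ in range(n_workers):                      # each worker holds its own shard of the data
    X = rng.normal(size=(n_per_worker, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n_per_worker)
    shards.append((X, y))

w = np.zeros(dim)                               # shared model parameters
lr = 0.1
for step in range(200):
    # In a real system these gradients are computed in parallel on separate machines.
    grads = [X.T @ (X @ w - y) / len(y) for X, y in shards]
    w -= lr * np.mean(grads, axis=0)            # coordinator averages and applies the update

print("parameter error:", np.linalg.norm(w - w_true))
```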
