
Wireless Body Area Networks (WBANs) comprise sensors subcutaneously implanted or placed near the body surface, and facilitate continuous monitoring of a patient's health parameters. Research endeavours involving WBANs are directed towards effective transmission of sensed parameters to a Local Processing Unit (LPU, usually a mobile device) and analysis of the parameters at the LPU or a back-end cloud. An important concern in WBANs is the lightweight nature of their nodes and the need to conserve their energy; this is especially true for subcutaneously implanted nodes that cannot be recharged or regularly replaced. Work on energy conservation is mostly aimed at optimising the routing of signals to minimise the energy expended. In this paper, a simple yet innovative approach to energy conservation and detection of alarming health status is proposed. Energy conservation is ensured through a two-tier approach wherein the first tier eliminates 'uninteresting' health parameter readings at the site of a sensing node and prevents them from being transmitted across the WBAN to the LPU. A reading is categorised as uninteresting if it deviates only slightly from its immediately preceding reading and provides no new insight into the patient's well-being. In addition, faulty readings emanating from possible sensor malfunctions are also eliminated. These eliminations are performed at the sensor using algorithms light enough to function effectively in the extremely resource-constrained environments of the sensor nodes. Experiments show that this eliminates around 90% of the readings that would otherwise be transmitted to the LPU, leading to significant energy savings, and the proper functioning of these algorithms in such constrained environments is confirmed and validated on a hardware simulation setup. The second tier of assessment is a proposed anomaly detection model at the LPU that identifies, from streaming health parameter readings, anomalies that indicate an adverse medical condition. In addition to handling streaming data, the model works within the resource-constrained environment of an LPU and eliminates the need to transmit the data to a back-end cloud, ensuring further energy savings. The anomaly detection capability of the model is validated using data from hospital critical care units and is shown to be superior to other anomaly detection techniques.
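
As an illustration of the first tier, the following minimal Python sketch keeps a reading only if it lies within a physically plausible range and deviates sufficiently from the last transmitted reading; the function name, threshold and range are hypothetical, chosen here for a body-temperature stream, and are not taken from the paper.

```python
def should_transmit(reading, last_sent, delta=0.5, valid_range=(30.0, 45.0)):
    """Return True if a reading is 'interesting' enough to send to the LPU.

    delta: minimum change (in the parameter's units) worth transmitting.
    valid_range: plausible physical bounds; values outside suggest a
    sensor malfunction and are dropped rather than transmitted.
    """
    lo, hi = valid_range
    if not lo <= reading <= hi:       # likely faulty sensor reading
        return False
    if last_sent is not None and abs(reading - last_sent) < delta:
        return False                  # 'uninteresting': barely changed
    return True

# Example: only 2 of these 6 temperature readings are transmitted.
last = None
for r in [36.7, 36.7, 36.8, 39.2, 39.2, 120.0]:
    if should_transmit(r, last):
        last = r                      # transmit r and remember it
        print("send", r)
```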

Related content

In data mining, anomaly detection is the identification of items, events or observations that do not conform to an expected pattern or to the other items in a dataset. Anomalous items typically translate to problems such as bank fraud, structural defects, medical problems or textual errors. Anomalies are also referred to as outliers, novelties, noise, deviations and exceptions. In particular, when detecting abuse and network intrusion, the interesting objects are often not rare objects but unexpected bursts of activity. This pattern does not adhere to the common statistical definition of an anomaly as a rare object, and many anomaly detection methods (in particular unsupervised methods) will fail on such data unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. There are three broad categories of anomaly detection techniques.[1] Unsupervised anomaly detection techniques detect anomalies in an unlabelled test dataset under the assumption that the majority of the instances in the dataset are normal, by looking for instances that seem to fit least with the remainder of the dataset. Supervised anomaly detection techniques require a dataset labelled 'normal' and 'abnormal' and involve training a classifier (the key difference from many other statistical classification problems being the inherently unbalanced nature of anomaly detection). Semi-supervised anomaly detection techniques construct a model representing normal behaviour from a given normal training dataset, and then test the likelihood of a test instance being generated by the learned model.
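
As a concrete illustration of the unsupervised case, the short Python sketch below flags points that lie far from the bulk of the data using a robust z-score; this is a generic textbook technique, shown only to make the definition tangible.

```python
import numpy as np

def robust_zscores(x):
    """Score each point by its distance from the median, scaled by the
    median absolute deviation (MAD); robust to the outliers themselves."""
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1e-9   # guard against zero MAD
    return 0.6745 * (x - med) / mad            # ~N(0,1) scale for normal data

x = np.array([9.8, 10.1, 9.9, 10.0, 10.2, 30.0, 9.7])
print(np.abs(robust_zscores(x)) > 3.5)   # True only for the outlier 30.0
```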

A functional time series approach is proposed for investigating spatial correlation in daily maximum temperature forecast errors for 111 cities spread across the U.S. The modelling of spatial correlation is most fruitful for longer forecast horizons, and becomes less relevant as the forecast horizon shrinks towards zero. For 6-day-ahead forecasts, the functional approach uncovers interpretable regional spatial effects, and captures the higher variance observed in inland cities versus coastal cities, as well as in mountain and midwest states. The functional approach also naturally handles missing data by modelling a continuum, and can be implemented efficiently by exploiting the sparsity induced by a B-spline basis. The temporal dependence in the data is modelled through temporal dependence in the functional basis coefficients. Independent first-order autoregressions with generalized autoregressive conditional heteroskedasticity [AR(1)+GARCH(1,1)] and Student-t innovations work well to capture both the persistence of basis coefficients over time and the seasonal heteroskedasticity reflecting higher variance in winter. By exploiting autocorrelation in the basis coefficients, the functional time series approach also yields a method for improving weather forecasts and quantifying their uncertainty. The resulting method corrects for bias in the weather forecasts while reducing the error variance.
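
To make the time series component concrete, the sketch below fits an AR(1)+GARCH(1,1) model with Student-t innovations to a single coefficient series using the Python `arch` package; the series here is synthetic, standing in for a fitted B-spline basis coefficient, so this is an illustration of the model class rather than a reproduction of the paper's analysis.

```python
import numpy as np
from arch import arch_model

# Synthetic stand-in for one B-spline basis-coefficient series.
coef_series = np.random.default_rng(0).standard_t(df=5, size=1000)

# AR(1) mean with GARCH(1,1) variance and Student-t innovations.
model = arch_model(coef_series, mean="AR", lags=1,
                   vol="GARCH", p=1, q=1, dist="t")
result = model.fit(disp="off")
print(result.summary())

# One-step-ahead forecast of the coefficient and its variance, the
# quantities that feed forecast correction and uncertainty bands.
forecast = result.forecast(horizon=1)
print(forecast.mean.iloc[-1], forecast.variance.iloc[-1])
```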

Game boards are described in the Ludii general game system by their underlying graphs, based on tiling, shape and graph operators, with the automatic detection of important properties such as topological relationships between graph elements, directions and radial step sequences. This approach allows most conceivable game boards to be described simply and succinctly.
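
To illustrate the idea (though not Ludii's actual description language, which is declarative and far more expressive), a square tiling and its orthogonal adjacencies can be derived purely from vertex coordinates, as in this hypothetical Python sketch:

```python
def square_board(rows, cols):
    """Build the graph of a square-tiled board: vertices are grid cells,
    edges connect orthogonal neighbours (a derived topological relation)."""
    vertices = [(r, c) for r in range(rows) for c in range(cols)]
    edges = {(v, (v[0] + dr, v[1] + dc))
             for v in vertices
             for dr, dc in [(0, 1), (1, 0)]          # east, south steps
             if v[0] + dr < rows and v[1] + dc < cols}
    return vertices, edges

verts, edges = square_board(3, 3)
print(len(verts), len(edges))   # 9 vertices, 12 orthogonal edges
```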

Multi-core and highly-connected architectures have become ubiquitous, and this has brought renewed interest in language-based approaches to the exploitation of parallelism. Since its inception, logic programming has been recognized as a programming paradigm with great potential for automated exploitation of parallelism. The comprehensive survey of the first twenty years of research in parallel logic programming, published in 2001, has since served as a fundamental reference to researchers and developers. Its contents remain valid today, but the field has continued to evolve at a fast pace in the years that have followed. Many of these achievements and much of the ongoing research have been driven by the rapid pace of technological innovation, which has led to advances such as very large clusters, the wide diffusion of multi-core processors, the game-changing role of general-purpose graphics processing units, and the ubiquitous adoption of cloud computing. This has been paralleled by significant advances within logic programming, such as tabling, more powerful static analysis and verification, the rapid growth of Answer Set Programming, and, in general, more mature implementations and systems. This survey reviews the research in parallel logic programming covering the period since 2001, providing a natural continuation of the previous survey. Its goal is to serve not only as a reference for researchers and developers of logic programming systems, but also as engaging reading for anyone interested in logic, and as a useful source for researchers in parallel systems outside logic programming. Under consideration in Theory and Practice of Logic Programming (TPLP).

Between 2015 and 2019, members of the Horizon 2020-funded Innovative Training Network "AMVA4NewPhysics" studied the customization and application of advanced multivariate analysis methods and statistical learning tools to high-energy physics problems, and developed entirely new ones. Many of those methods were successfully used to improve the sensitivity of data analyses performed by the ATLAS and CMS experiments at the CERN Large Hadron Collider; several others, still in the testing phase, promise to further improve the precision of measurements of fundamental physics parameters and the reach of searches for new phenomena. In this paper, the most relevant new tools, among those studied and developed, are presented along with an evaluation of their performance.

Wireless devices need spectrum to communicate. With the increase in the number of devices competing for the same spectrum, it has become nearly impossible to support the throughput requirements of all devices through current spectrum sharing methods. In this work, we look at the problem of spectrum resource contention from a fundamental standpoint, taking inspiration from the principles of globalization. We develop a distributed algorithm whereby wireless nodes democratically share spectrum resources and improve their spectral efficiency and throughput without additional power or spectrum. We validate the performance of our proposed democratic spectrum sharing (DSS) algorithm on real-world Wi-Fi networks and on synthetically generated networks with varying design parameters. Compared to the greedy approach, DSS achieves significant gains in throughput (~60%), area spectral efficiency (~50%) and fairness of the data rate distribution (~20%). Owing to the distributed nature of the proposed algorithm, it can be applied to wireless networks of any size and density.
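
For context, the toy Python sketch below implements a generic distributed greedy channel-selection scheme, the kind of baseline DSS is compared against: each node repeatedly picks the channel least used by its interfering neighbours. It is illustrative only; the DSS algorithm itself adds the democratic sharing and fairness mechanisms described in the paper.

```python
import random

def share_spectrum(neighbours, n_channels=3, rounds=20, seed=0):
    """neighbours: dict mapping each node to the set of nodes it interferes
    with. Each node greedily moves to its locally least-loaded channel."""
    rng = random.Random(seed)
    channel = {v: rng.randrange(n_channels) for v in neighbours}
    for _ in range(rounds):
        for v in neighbours:                     # nodes update in turn
            load = [0] * n_channels
            for u in neighbours[v]:
                load[channel[u]] += 1
            channel[v] = min(range(n_channels), key=lambda c: load[c])
    return channel

# Example: a 4-node line network where adjacent nodes interfere;
# after a few rounds, adjacent nodes sit on different channels.
g = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(share_spectrum(g))
```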

Recent studies on mobile network design have demonstrated the remarkable effectiveness of channel attention (e.g., Squeeze-and-Excitation attention) for lifting model performance, but they generally neglect positional information, which is important for generating spatially selective attention maps. In this paper, we propose a novel attention mechanism for mobile networks that embeds positional information into channel attention, which we call "coordinate attention". Unlike channel attention, which transforms a feature tensor into a single feature vector via 2D global pooling, coordinate attention factorizes channel attention into two 1D feature encoding processes that aggregate features along the two spatial directions, respectively. In this way, long-range dependencies can be captured along one spatial direction while precise positional information is preserved along the other. The resulting feature maps are then encoded separately into a pair of direction-aware and position-sensitive attention maps that can be applied complementarily to the input feature map to augment the representations of the objects of interest. Our coordinate attention is simple and can be flexibly plugged into classic mobile networks, such as MobileNetV2, MobileNeXt, and EfficientNet, with nearly no computational overhead. Extensive experiments demonstrate that coordinate attention is not only beneficial to ImageNet classification but, more interestingly, behaves better in downstream tasks such as object detection and semantic segmentation. Code is available at https://github.com/Andrew-Qibin/CoordAttention.
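
The mechanism can be sketched in a few lines of PyTorch. The following is a simplified rendition based on the description above; details such as the exact activation and pooling calls may differ from the official code at the linked repository.

```python
import torch
import torch.nn as nn

class CoordinateAttention(nn.Module):
    """Pool along H and W separately, encode jointly, then split into
    two direction-aware attention maps applied back to the input."""
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn = nn.BatchNorm2d(mid)
        self.act = nn.ReLU(inplace=True)  # the paper uses a hard-swish variant
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        # 1D global pooling along each spatial direction
        x_h = x.mean(dim=3, keepdim=True)                       # (n, c, h, 1)
        x_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)   # (n, c, w, 1)
        # joint encoding, then split back into the two directions
        y = self.act(self.bn(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # (n, c, h, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # (n, c, 1, w)
        return x * a_h * a_w   # broadcast the two attention maps over x

x = torch.randn(2, 64, 32, 32)
print(CoordinateAttention(64)(x).shape)   # torch.Size([2, 64, 32, 32])
```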

Transformer model architectures have garnered immense interest lately due to their effectiveness across a range of domains like language, vision and reinforcement learning. In the field of natural language processing for example, Transformers have become an indispensable staple in the modern deep learning stack. Recently, a dizzying number of "X-former" models have been proposed - Reformer, Linformer, Performer, Longformer, to name a few - which improve upon the original Transformer architecture, many of which make improvements around computational and memory efficiency. With the aim of helping the avid researcher navigate this flurry, this paper characterizes a large and thoughtful selection of recent efficiency-flavored "X-former" models, providing an organized and comprehensive overview of existing work and models across multiple domains.

The ever-growing interest in the acquisition and development of unmanned aerial vehicles (UAVs), commonly known as drones, has in the past few years produced a very promising and effective technology. Because of their small size and fast deployment, UAVs have proven effective at collecting data over unreachable areas and restricted coverage zones. Moreover, their flexibly defined capacity enables them to collect information at a very high level of detail, yielding high-resolution images. UAVs originally served mainly in military scenarios; however, in the last decade they have been broadly adopted in civilian applications as well. The task of aerial surveillance and situation awareness is usually accomplished by integrating intelligence, surveillance, observation, and navigation systems, all interacting within the same operational framework. To build this capability, UAVs are well-suited tools that can be equipped with a wide variety of sensors, such as cameras or radars. Deep learning has been widely recognized as a prominent approach in many computer vision applications, where one-stage and two-stage object detectors are regarded as the two most important groups of Convolutional Neural Network based object detection methods. One-stage object detectors usually outperform two-stage detectors in speed but normally trail them in detection accuracy. In this study, the focal-loss-based RetinaNet, a one-stage object detector, is used for UAV-based object detection, matching the speed of regular one-stage detectors while surpassing two-stage detectors in accuracy. State-of-the-art performance is demonstrated on a UAV-captured image dataset, the Stanford Drone Dataset (SDD).
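
For reference, the focal loss that RetinaNet is trained with is FL(p_t) = -alpha_t (1 - p_t)^gamma * log(p_t), which down-weights well-classified examples so training focuses on hard ones. Below is a minimal binary-classification sketch in PyTorch; the full detector applies this per anchor and normalizes by the number of assigned anchors.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss (Lin et al.): scale the cross-entropy of each
    example by alpha_t * (1 - p_t)**gamma to emphasize hard examples."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)          # prob. of true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).sum()

logits = torch.tensor([2.0, -1.0, 0.5])    # raw scores for 3 examples
targets = torch.tensor([1.0, 0.0, 1.0])    # 1 = object, 0 = background
print(focal_loss(logits, targets))
```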

In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated into the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, including the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at a comprehensive and in-depth evaluation of CNN-to-FPGA toolflows.

There is a need for systems that dynamically interact with ageing populations to gather information, monitor health conditions and provide support, especially after hospital discharge or in at-home settings. Digital health has delivered several smart devices, bundled with telemedicine systems, smartphones and other digital services. While such solutions offer personalised data and suggestions, the truly disruptive step comes from a new interactive digital ecosystem represented by chatbots. Chatbots will play a leading role by embodying the function of a virtual assistant and bridging the gap between patients and clinicians. Powered by AI and machine learning algorithms, chatbots are forecast to save healthcare costs when used in place of a human, or to assist clinicians as a preliminary step in assessing a condition and providing self-care recommendations. This paper describes the integration of chatbots into telemedicine systems intended for elderly patients after hospital discharge, and discusses possible ways to utilise chatbots to assist healthcare providers and support patients with their conditions.
