
In the last two decades, fall detection (FD) systems have been developed as a popular assistive technology. Such systems automatically detect critical fall events and immediately alert medical professionals or caregivers. To support long-term FD services, various power-saving strategies have been implemented. Among them, reducing the sampling rate is a common approach to building an energy-efficient system in the real world. However, the performance of FD systems is diminished by low-resolution (LR) accelerometer signals. To improve detection accuracy with LR accelerometer signals, several technical challenges must be addressed, including misalignment, mismatch of effective features, and degradation effects. In this work, a deep-learning-based accelerometer signal enhancement (ASE) model is proposed to improve the detection performance of LR-FD systems. The proposed model reconstructs high-resolution (HR) signals from LR signals by learning the relationship between the two. The results show that an FD system using a support vector machine and the proposed ASE model at an extremely low sampling rate (below 2 Hz) achieved accuracies of 97.34% and 90.52% on the SisFall and FallAllD datasets, respectively, whereas the system without the ASE model achieved only 95.92% and 87.47%. This study demonstrates that the ASE model helps FD systems overcome the technical challenges of LR signals and achieve better detection performance.
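
The abstract describes learning a mapping from LR to HR accelerometer windows. Below is a minimal sketch (PyTorch) of such a signal-enhancement network; the layer sizes, the 4x upsampling factor, and the training pairing are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ASENet(nn.Module):
    """Maps a low-resolution 3-axis accelerometer window to a high-resolution one."""
    def __init__(self, upscale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2),   # 3 accelerometer axes in
            nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="linear", align_corners=False),
            nn.Conv1d(32, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, 3, kernel_size=5, padding=2),   # 3 accelerometer axes out
        )

    def forward(self, lr_signal):          # (batch, 3, T_lr)
        return self.body(lr_signal)        # (batch, 3, T_lr * upscale)

# Training pairs would come from downsampling HR recordings to create LR inputs.
model = ASENet()
lr = torch.randn(8, 3, 50)                 # e.g., windows sampled near 2 Hz
hr_hat = model(lr)                         # reconstructed higher-rate windows
loss = nn.functional.mse_loss(hr_hat, torch.randn_like(hr_hat))  # dummy target
loss.backward()
```

The enhanced signal would then be fed to the downstream feature extractor and SVM classifier in place of the raw LR signal.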

Related Content

The 34th IEEE/ACM International Conference on Automated Software Engineering (ASE 2019) will be held November 11-15, 2019, in San Diego. The conference is the premier research forum for automated software engineering. Each year, it brings together researchers and practitioners from academia and industry to discuss foundations, techniques, and tools for automating the analysis, design, implementation, testing, and maintenance of large software systems.
November 17, 2021

The existing medium access control (MAC) protocol of Wi-Fi networks, carrier-sense multiple access with collision avoidance (CSMA/CA), suffers from poor performance in dense deployments due to the increasing number of collisions and the long average backoff time in such scenarios. To tackle this issue, we propose an intelligent wireless MAC protocol based on deep learning (DL), referred to as DL-MAC, which significantly improves the spectrum efficiency of Wi-Fi networks. The goal of DL-MAC is to enable not only intelligent channel access but also intelligent rate adaptation. To achieve this goal, we design a deep neural network (DNN) that takes historical received signal strength indications (RSSIs) as inputs and outputs a joint channel access and rate adaptation decision. Notably, DL-MAC takes the constraints of practical applications into account and is evaluated on real wireless data sampled from actual environments in the 2.4 GHz frequency band. The experimental results show that DL-MAC achieves around 86% of the performance of the globally optimal MAC, and roughly double the performance of the traditional Wi-Fi MAC, in both our lab and the Shenzhen Baoan International Airport departure hall.
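
As a minimal sketch of the network shape the abstract describes, the PyTorch model below consumes a window of historical RSSIs and emits two heads: a channel-access decision and a rate (MCS) index. The window length, hidden sizes, and number of rates are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DLMACNet(nn.Module):
    """Joint channel-access and rate-adaptation head over an RSSI history."""
    def __init__(self, history_len=16, num_rates=8):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(history_len, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
        )
        self.access_head = nn.Linear(64, 2)         # transmit vs. defer
        self.rate_head = nn.Linear(64, num_rates)   # MCS selection

    def forward(self, rssi_history):                # (batch, history_len)
        h = self.trunk(rssi_history)
        return self.access_head(h), self.rate_head(h)

net = DLMACNet()
access_logits, rate_logits = net(torch.randn(4, 16))
decision = access_logits.argmax(dim=1)              # 1 = transmit this slot
rate = rate_logits.argmax(dim=1)                    # chosen rate index
```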

This letter investigates reconfigurable intelligent surface (RIS)-aided massive multiple-input multiple-output (MIMO) systems with a two-timescale design. First, the zero-forcing (ZF) detector is applied at the base station (BS) based on instantaneous aggregated CSI, which is the superposition of the direct channel and the cascaded user-RIS-BS channel. Then, by leveraging the statistical properties of the channel, we derive a closed-form expression for the ergodic achievable rate. Using a gradient ascent method, we design the RIS passive beamforming relying only on long-term statistical CSI. We prove that the ergodic rate grows on the order of $\mathcal{O}\left(\log_{2}\left(MN\right)\right)$, where $M$ and $N$ denote the number of BS antennas and RIS elements, respectively. We also prove the striking superiority of the considered RIS-aided system with ZF detectors over RIS-free systems and RIS-aided systems with maximum-ratio combining (MRC).
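
The NumPy sketch below illustrates the aggregated channel and the ZF detector described above. The dimensions ($M$ BS antennas, $N$ RIS elements, $K$ users) and the i.i.d. Rayleigh draws are illustrative assumptions, and the letter's statistical gradient-ascent RIS design is replaced by random phase shifts for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, K = 64, 32, 4  # BS antennas, RIS elements, users

# Circularly symmetric complex Gaussian samples.
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

H_d  = crandn(M, K)            # direct user-BS channel
H_rb = crandn(M, N)            # RIS-BS channel
H_ur = crandn(N, K)            # user-RIS channel
theta = np.exp(1j * rng.uniform(0, 2 * np.pi, N))   # RIS phase shifts

H = H_d + H_rb @ np.diag(theta) @ H_ur              # aggregated instantaneous CSI

# ZF detector: pseudo-inverse of the aggregated channel.
W_zf = np.linalg.pinv(H)                            # (K, M)

x = crandn(K)                                       # transmitted uplink symbols
y = H @ x + 0.1 * crandn(M)                         # noisy received signal
x_hat = W_zf @ y                                    # ZF symbol estimates
print(np.round(np.abs(x_hat - x), 2))               # small residual errors
```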

Model-based reinforcement learning has attracted wide attention due to its superior sample efficiency. Despite its impressive success so far, it remains unclear how to appropriately schedule important hyperparameters to achieve adequate performance, such as the real data ratio for policy optimization in Dyna-style model-based algorithms. In this paper, we first theoretically analyze the role of real data in policy training, which suggests that gradually increasing the ratio of real data yields better performance. Inspired by this analysis, we propose a framework named AutoMBPO to automatically schedule the real data ratio, as well as other hyperparameters, when training the model-based policy optimization (MBPO) algorithm, a representative model-based method. On several continuous control tasks, the MBPO instance trained with hyperparameters scheduled by AutoMBPO significantly surpasses the original one, and the real data ratio schedule found by AutoMBPO is consistent with our theoretical analysis.
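
To make the scheduled quantity concrete, here is a minimal sketch of mixing real and model-generated transitions in a Dyna-style update batch with a gradually increasing real-data ratio. The linear schedule and batch mechanics are illustrative assumptions, not AutoMBPO's learned schedule.

```python
import random

# Dummy transitions so the sketch runs stand-alone; in practice these buffers
# are filled by environment rollouts and learned-model rollouts, respectively.
real_buffer  = [("s_real", "a", 1.0)] * 100
model_buffer = [("s_model", "a", 0.5)] * 100

def real_ratio(epoch, total_epochs, lo=0.05, hi=0.5):
    """Linearly anneal the fraction of real data from lo to hi over training."""
    return lo + (hi - lo) * epoch / max(1, total_epochs - 1)

def sample_batch(epoch, total_epochs, batch_size=256):
    r = real_ratio(epoch, total_epochs)
    n_real = int(r * batch_size)
    batch = (random.choices(real_buffer, k=n_real) +
             random.choices(model_buffer, k=batch_size - n_real))
    random.shuffle(batch)
    return batch

for epoch in (0, 50, 99):   # fraction of real transitions rises over epochs
    b = sample_batch(epoch, 100)
    print(epoch, sum(t[0] == "s_real" for t in b) / len(b))
```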

Knowledge of channel state information (CSI) is fundamental to many functionalities within mobile wireless communication systems. With the advance of machine learning (ML) and digital maps, i.e., digital twins, there is a significant opportunity to learn the propagation environment and design novel methods to derive and report CSI. In this work, we propose to combine untrained neural networks (UNNs) and conditional generative adversarial networks (cGANs) for MIMO channel recreation based on prior knowledge. The UNNs learn the prior CSI for a set of locations, which is used to build the input to a cGAN. Based on the prior CSIs, their locations, and the location of the desired channel, the cGAN is trained to output the channel expected at the desired location. This combined approach can be used for low-overhead CSI reporting since, after training, only the desired location needs to be reported. Our results show that our method successfully models the wireless channel and is robust to location quantization errors under line-of-sight conditions.
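
The sketch below (PyTorch) shows the conditioning structure the abstract describes: a cGAN generator takes prior CSIs, their locations, and the desired location, and outputs the channel predicted there. All dimensions and the flattened-MLP form are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_TX, N_RX, N_PRIOR = 4, 2, 3   # antennas and number of prior-CSI sites (assumed)

class ChannelGenerator(nn.Module):
    """cGAN generator: (prior CSIs + locations + target location, noise) -> channel."""
    def __init__(self):
        super().__init__()
        # Each prior site contributes a flattened complex channel (real+imag)
        # plus an (x, y) location; the target contributes its (x, y).
        cond_dim = N_PRIOR * (2 * N_TX * N_RX + 2) + 2
        self.net = nn.Sequential(
            nn.Linear(cond_dim + 16, 128), nn.ReLU(),   # 16-dim noise vector
            nn.Linear(128, 2 * N_TX * N_RX),            # real + imag parts
        )

    def forward(self, condition, noise):
        out = self.net(torch.cat([condition, noise], dim=1))
        return out.view(-1, 2, N_RX, N_TX)              # channel at target site

gen = ChannelGenerator()
cond = torch.randn(1, 3 * (2 * 4 * 2 + 2) + 2)          # prior CSIs + locations
h_pred = gen(cond, torch.randn(1, 16))                  # predicted MIMO channel
```

After adversarial training against a discriminator, CSI reporting reduces to sending the desired location, as the abstract notes.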

Cell-free massive multiple-input multiple-output (mMIMO) systems consist of many access points (APs) in a coverage area that jointly serve the users. Compared to conventional MIMO networks, these systems can significantly reduce inter-user interference and so enable higher data rates and a larger coverage area. However, cell-free mMIMO systems face several practical challenges, such as the high complexity and power consumption of the APs' analog front-ends. Motivated by prior works, we address these issues by considering a low-complexity hybrid beamforming framework at the APs in which each AP has a limited number of RF chains to reduce power consumption, and the analog combiner is designed using only the large-scale statistics of the channel to reduce system complexity. We provide closed-form expressions for the signal-to-interference-plus-noise ratio (SINR) of both uplink and downlink data transmission using accurate random matrix approximations. In addition, building on the existing literature, we provide a power optimization algorithm that maximizes the minimum SINR of the users in the uplink scenario. Through several simulations, we investigate the accuracy of the derived random matrix approximations, the trade-off between the 95% outage data rate and the number of RF chains, and the impact of power optimization. We observe that the derived approximations closely follow the exact simulations and that, in the uplink scenario with an MMSE combiner, power optimization does not substantially improve performance.
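
A common way to realize max-min SINR power control of the kind referenced above is bisection over a common SINR target with a fixed-point feasibility check. The sketch below uses scalar effective gains and a crosstalk matrix as an illustrative stand-in for the paper's closed-form SINR expressions.

```python
import numpy as np

rng = np.random.default_rng(1)
K, p_max, noise = 4, 1.0, 0.1
g = rng.uniform(0.5, 2.0, K)           # effective channel gains (assumed model)
I = rng.uniform(0.0, 0.2, (K, K))      # crosstalk coefficients between users
np.fill_diagonal(I, 0.0)

def feasible(t):
    """Check whether common SINR target t is reachable within the power budget."""
    p = np.full(K, p_max)
    for _ in range(200):               # fixed-point power update
        p = t * (I @ p + noise) / g    # power needed to hit t given others' powers
        if p.max() > p_max:
            return False, p
    return True, p

lo, hi = 0.0, 100.0
for _ in range(50):                    # bisect on the common SINR target
    mid = 0.5 * (lo + hi)
    ok, _ = feasible(mid)
    lo, hi = (mid, hi) if ok else (lo, mid)

_, p = feasible(lo)
print("max-min SINR ~", round(lo, 3), "powers:", np.round(p, 3))
```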

Remote examinations and job interviews have gained popularity and become indispensable because of both the pandemic and the advantages of remote working arrangements. Most companies and academic institutions use such systems for their recruitment processes and for online exams. However, one of the critical problems of remote examination systems is conducting the exams in a reliable environment. In this work, we present a cheating analysis pipeline for online interviews and exams. The system requires only a video of the candidate recorded during the exam. A cheating detection pipeline is then employed to detect the presence of another person, electronic device usage, and candidate absence. The pipeline consists of face detection, face recognition, object detection, and face tracking algorithms. To evaluate the performance of the pipeline, we collected a private video dataset that includes both cheating activities and clean videos. Ultimately, our pipeline provides an efficient and fast way to detect and analyze cheating activities in online interview and exam videos.
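
The per-frame decision logic of such a pipeline can be sketched as below. The detector calls (detect_faces, recognize, detect_devices) are hypothetical placeholders standing in for the face-detection, face-recognition, and object-detection models the abstract names; the stubs make the sketch runnable.

```python
def detect_faces(frame):
    return ["face"]        # stub: list of detected face crops

def recognize(face, ref_embedding):
    return True            # stub: does this face match the enrolled candidate?

def detect_devices(frame):
    return []              # stub: detected phones/tablets, etc.

def analyze_frame(frame, ref_embedding):
    """Map one video frame to a list of cheating-related events."""
    events = []
    faces = detect_faces(frame)
    if not faces:
        events.append("candidate_absent")
    elif len(faces) > 1:
        events.append("another_person")
    elif not recognize(faces[0], ref_embedding):
        events.append("identity_mismatch")
    if detect_devices(frame):
        events.append("electronic_device")
    return events

for t, frame in enumerate([object()] * 3):   # stand-in for decoded video frames
    print(t, analyze_frame(frame, ref_embedding=None))
```

In a real system, face tracking would smooth these per-frame events over time before flagging an incident.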

Cognitive diagnosis is a fundamental task in intelligent education that aims to discover students' proficiency levels on specific knowledge concepts. Existing approaches usually model linear interactions in the student exercising process with manually designed functions (e.g., the logistic function), which is not sufficient for capturing the complex relations between students and exercises. In this paper, we propose a general Neural Cognitive Diagnosis (NeuralCD) framework that incorporates neural networks to learn these complex exercising interactions and obtain diagnosis results that are both accurate and interpretable. Specifically, we project students and exercises to factor vectors and leverage multiple neural layers to model their interactions, applying the monotonicity assumption to ensure the interpretability of both factors. Furthermore, we propose two implementations of NeuralCD that specialize the required concepts of each exercise: NeuralCDM, which uses the traditional Q-matrix, and the improved NeuralCDM+, which exploits rich text content. Extensive experimental results on real-world datasets show the effectiveness of the NeuralCD framework in terms of both accuracy and interpretability.
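
A minimal sketch (PyTorch) of the interaction function described above: student and exercise factor vectors over knowledge concepts are combined, masked by the exercise's Q-matrix row, and passed through layers whose weights are clamped non-negative to enforce the monotonicity assumption. Sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

N_CONCEPTS = 8

class NeuralCDBlock(nn.Module):
    """Predicts P(correct) from student proficiency and exercise difficulty factors."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(N_CONCEPTS, 32)
        self.fc2 = nn.Linear(32, 1)

    def forward(self, h_student, h_diff, q_row):
        # Proficiency minus difficulty, restricted to the concepts the exercise tests.
        x = (torch.sigmoid(h_student) - torch.sigmoid(h_diff)) * q_row
        x = torch.sigmoid(self.fc1(x))
        return torch.sigmoid(self.fc2(x))            # P(correct)

    def clamp_monotonic(self):
        # Non-negative weights: higher proficiency never lowers P(correct).
        for fc in (self.fc1, self.fc2):
            fc.weight.data.clamp_(min=0.0)

block = NeuralCDBlock()
q = torch.tensor([[1., 0., 1., 0., 0., 0., 1., 0.]] * 2)   # Q-matrix rows
p = block(torch.randn(2, N_CONCEPTS), torch.randn(2, N_CONCEPTS), q)
block.clamp_monotonic()   # call after each optimizer step during training
```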

Deep learning has been successfully applied to solve various complex problems ranging from big data analytics to computer vision and human-level control. Advances in deep learning, however, have also been employed to create software that threatens privacy, democracy, and national security. One such deep-learning-powered application to emerge recently is the "deepfake". Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. Technologies that can automatically detect and assess the integrity of digital visual media are therefore indispensable. This paper presents a survey of algorithms used to create deepfakes and, more importantly, of methods proposed to detect deepfakes in the literature to date. We present extensive discussions on the challenges, research trends, and directions related to deepfake technologies. By reviewing the background of deepfakes and state-of-the-art deepfake detection methods, this study provides a comprehensive overview of deepfake techniques and facilitates the development of new, more robust methods to deal with increasingly challenging deepfakes.

Transfer learning is one of the subjects undergoing intense study in machine learning. In object recognition and object detection, there are known experiments on the transferability of parameters, but not for neural networks suitable for real-time object detection in embedded applications, such as the SqueezeDet network. We use transfer learning to accelerate the training of SqueezeDet on a new group of classes. We also conduct experiments to study the transferability and co-adaptation phenomena introduced by the transfer learning process. To accelerate training, we propose a new implementation of SqueezeDet training that provides a faster data-processing pipeline and achieves a $1.8$ times speedup over the initial implementation. Finally, we create a mechanism for automatic hyperparameter optimization using an empirical method.
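
The parameter-transfer recipe studied here can be sketched as follows (PyTorch): copy pretrained backbone weights, freeze them, and train a fresh detection head for the new classes. The two-module split and layer shapes are illustrative assumptions, not SqueezeDet's actual code.

```python
import torch
import torch.nn as nn

# Stand-in "backbone" assumed to carry pretrained weights.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
head_new = nn.Conv2d(32, 5 * 3, 1)   # fresh head: e.g., 3 new classes, 5 outputs each

# Transfer: reuse the backbone, freeze it, train only the new head.
for p in backbone.parameters():
    p.requires_grad = False

opt = torch.optim.SGD(head_new.parameters(), lr=1e-3)
out = head_new(backbone(torch.randn(1, 3, 64, 64)))
loss = out.mean()                    # dummy loss in place of the detection loss
loss.backward()                      # gradients flow only into head_new
opt.step()
```

Unfreezing deeper backbone layers later is the usual way to probe the co-adaptation effects the abstract mentions.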

This paper introduces an online model for object detection in videos designed to run in real time on low-powered mobile and embedded devices. Our approach combines fast single-image object detection with convolutional long short-term memory (LSTM) layers to create an interwoven recurrent-convolutional architecture. Additionally, we propose an efficient Bottleneck-LSTM layer that significantly reduces computational cost compared to regular LSTMs. Our network achieves temporal awareness by using Bottleneck-LSTMs to refine and propagate feature maps across frames. This approach is substantially faster than existing video detection methods, outperforming the fastest single-frame models in model size and computational cost while attaining accuracy comparable to much more expensive single-frame models on the ImageNet VID 2015 dataset. Our model reaches a real-time inference speed of up to 15 FPS on a mobile CPU.
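
A minimal sketch (PyTorch) of a bottleneck convolutional LSTM cell in the spirit of the layer described above: the input and hidden state are first projected to a narrow "bottleneck" channel count before the gate convolutions, cutting their cost. Channel sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BottleneckConvLSTMCell(nn.Module):
    """ConvLSTM cell whose gates operate on a reduced-channel bottleneck."""
    def __init__(self, in_ch=64, hid_ch=64, bottleneck=16):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch + hid_ch, bottleneck, 3, padding=1)
        self.gates = nn.Conv2d(bottleneck, 4 * hid_ch, 3, padding=1)

    def forward(self, x, h, c):
        b = torch.relu(self.reduce(torch.cat([x, h], dim=1)))  # bottleneck features
        i, f, o, g = self.gates(b).chunk(4, dim=1)             # gate pre-activations
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c            # refined feature map h feeds the detection head

cell = BottleneckConvLSTMCell()
h = c = torch.zeros(1, 64, 20, 20)
for frame_feat in torch.randn(5, 1, 64, 20, 20):   # per-frame backbone features
    h, c = cell(frame_feat, h, c)                  # state propagates across frames
```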
