Immersiveness is the main characteristic of Virtual Reality (VR) applications. Precise integration between hardware design and software is necessary to provide a seamless virtual experience. Allowing the user to navigate the VR scene using locomotion techniques is crucial for making such experiences immersive. Locomotion in VR acts as a motion tracking unit for the user and simulates their movement in the virtual scene. These movements are commonly rotational, axial, or translational, depending on the Degrees of Freedom (DOF) supported by the application. To support effective locomotion, one of the primary challenges for VR practitioners is to transform their hardware from 3-DOF to 6-DOF or vice versa. We conducted a systematic review of the motion tracking methods employed in Head-Mounted Devices (HMDs) to understand such hardware transformations. Our review discusses the fundamental aspects of the hardware-based transformation of HMDs for virtual locomotion. Our observations led us to formulate a taxonomy of tracking methods based on system design, which can be used to guide the hardware transformation of HMDs. Our study also captures the different metrics that VR practitioners use to evaluate the hardware based on the context, performance, and significance of its usage.
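
To make the 3-DOF/6-DOF distinction concrete, here is a minimal, hedged sketch of the two pose representations the review refers to; the class and function names are illustrative and not taken from any surveyed system.

```python
from dataclasses import dataclass

@dataclass
class Pose3DOF:
    """Rotation-only head pose (yaw, pitch, roll in radians)."""
    yaw: float
    pitch: float
    roll: float

@dataclass
class Pose6DOF(Pose3DOF):
    """Adds translational tracking (metres) on top of rotation."""
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

def promote(p: Pose3DOF, x: float = 0.0, y: float = 0.0, z: float = 0.0) -> Pose6DOF:
    """Illustrative 3-DOF to 6-DOF 'transformation': attach positional data
    (e.g. from an external tracker) to a rotation-only sample."""
    return Pose6DOF(p.yaw, p.pitch, p.roll, x, y, z)
```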

Related content

Chase-Pyndiah decoding is widely used for decoding product codes. However, this method is suboptimal and requires scaling the soft information exchanged during the iterative processing. In this paper, we propose a framework for obtaining the scaling coefficients based on maximizing the generalized mutual information. Our approach yields gains up to 0.11 dB for product codes with two-error correcting extended BCH component codes over the binary-input additive white Gaussian noise channel compared to the original Chase-Pyndiah decoder with heuristically obtained coefficients. We also introduce an extrinsic version of the Chase-Pyndiah decoder and associate product codes with a turbo-like code ensemble to derive a Monte Carlo-based density evolution analysis. The resulting iterative decoding thresholds accurately predict the onset of the waterfall region.
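
As a rough illustration of where the scaling coefficients enter the iterative processing, the following sketch shows one Chase-Pyndiah half-iteration with a per-iteration coefficient applied to the incoming extrinsic information. The component decoder is a placeholder callable, and the coefficient schedule shown is a generic heuristic example, not the GMI-optimized values derived in the paper.

```python
import numpy as np

def half_iteration(channel_llr, extrinsic_in, alpha, component_decoder):
    """One Chase-Pyndiah half-iteration (sketch).

    channel_llr       : channel LLRs for one row/column of the product code
    extrinsic_in      : extrinsic LLRs from the previous half-iteration
    alpha             : scaling coefficient for the incoming extrinsic info
    component_decoder : soft-in/soft-out decoder of the component code,
                        returning extrinsic LLRs (placeholder callable)
    """
    decoder_input = channel_llr + alpha * extrinsic_in
    extrinsic_out = component_decoder(decoder_input)
    return extrinsic_out

# Illustrative heuristic schedule, one coefficient per half-iteration; the
# paper instead obtains these by maximizing the generalized mutual information.
alphas = [0.0, 0.2, 0.3, 0.5, 0.7, 0.9, 1.0, 1.0]
```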

We investigate the emergent abilities of the recently proposed web-scale speech model Whisper by adapting it to unseen tasks with prompt engineering. We select three tasks: audio-visual speech recognition (AVSR), code-switched speech recognition (CS-ASR), and speech translation (ST) on unseen language pairs. We design task-specific prompts, either by leveraging another large-scale model or by simply manipulating the special tokens in the default prompts. Experiments show that, compared to the default prompts, our proposed prompts improve performance by 10% to 45% on the three zero-shot tasks, and even outperform SotA supervised models on some datasets. In addition, our experiments reveal many interesting properties of Whisper, including its robustness to prompts, its bias towards accents, and the multilingual understanding in its latent space. Code is available at //github.com/jasonppy/PromptingWhisper
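
For orientation, here is a minimal sketch of steering Whisper with the knobs exposed by the open-source `openai-whisper` package (task, language, and a text prompt). The audio path and prompt text are placeholders, and this only approximates the idea; the paper's method also manipulates the special tokens in the decoder prompt directly, which is not shown here.

```python
# pip install -U openai-whisper
import whisper

model = whisper.load_model("large")  # any released checkpoint

# Zero-shot speech translation by steering the task/language fields;
# "audio.wav" is a placeholder file.
result = model.transcribe(
    "audio.wav",
    task="translate",          # translate into English instead of transcribing
    language="es",             # claimed source language
    initial_prompt="Glossary: code-switching, Whisper, prompt engineering.",
)
print(result["text"])
```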

RF data-driven device fingerprinting through the use of deep learning has recently surfaced as a potential solution for automated network access authentication. Traditional approaches are commonly susceptible to the domain adaptation problem, where a model trained on data from one domain performs poorly when tested on data from a different domain. Examples of a domain change include varying the device location or environment and varying the time or day of data collection. In this work, we propose using multifractal analysis and the variance fractal dimension trajectory (VFDT) as a data representation input to the deep neural network to extract device fingerprints that are domain generalizable. We analyze the effectiveness of the proposed VFDT representation in detecting device-specific signatures from hardware-impaired IQ signals, and evaluate its robustness in real-world settings, using an experimental testbed of 30 WiFi-enabled Pycom devices under different locations and at different scales. Our results show that the VFDT representation significantly improves the scalability, robustness, and generalizability of the deep learning models compared to using raw IQ data.
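
The sketch below shows one common way to estimate a variance fractal dimension and turn it into a sliding-window trajectory over IQ samples; it is a simplified, assumption-laden version (scales, window/hop sizes, and the use of the magnitude signal are illustrative choices), not the paper's exact preprocessing.

```python
import numpy as np

def variance_fractal_dimension(x, scales=(1, 2, 4, 8, 16)):
    """Estimate the variance fractal dimension of a 1-D signal (sketch).

    The Hurst-like exponent H is half the slope of log Var[x(t+dt) - x(t)]
    against log dt; the variance fractal dimension is then D = 2 - H.
    """
    log_dt, log_var = [], []
    for dt in scales:
        inc = x[dt:] - x[:-dt]
        log_dt.append(np.log(dt))
        log_var.append(np.log(np.var(inc) + 1e-12))
    slope, _ = np.polyfit(log_dt, log_var, 1)
    return 2.0 - slope / 2.0

def vfdt(iq, window=256, hop=64):
    """Sliding-window trajectory of the variance fractal dimension over |IQ|."""
    mag = np.abs(np.asarray(iq))
    return np.array([
        variance_fractal_dimension(mag[i:i + window])
        for i in range(0, len(mag) - window + 1, hop)
    ])
```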

The importance of preventing microarchitectural timing side channels in security-critical applications has surged in recent years. Constant-time programming has emerged as a best-practice technique for preventing the leakage of secret information through timing. It is based on the assumption that the timing of certain basic machine instructions is independent of their respective input data. However, whether or not an instruction satisfies this data-independent timing criterion varies between individual processor microarchitectures. In this paper, we propose a novel methodology to formally verify data-oblivious behavior in hardware using standard property checking techniques. The proposed methodology is based on an inductive property that enables scalability even to complex out-of-order cores. We show that proving this inductive property is sufficient to exhaustively verify data-obliviousness at the microarchitectural level. In addition, the paper discusses several techniques that can be used to make the verification process easier and faster. We demonstrate the feasibility of the proposed methodology through case studies on several open-source designs. One case study uncovered a data-dependent timing violation in the extensively verified and highly secure IBEX RISC-V core. In addition to several hardware accelerators and in-order processors, our experiments also include RISC-V BOOM, a complex out-of-order processor, highlighting the scalability of the approach.
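
To illustrate the constant-time programming assumption the paper builds on (that the timing of the executed operations does not depend on secret data), here is a small software-level example of a leaky early-exit comparison versus a data-oblivious one. This is only an illustration of the concept, not the paper's hardware verification methodology, and Python itself gives no hard timing guarantees; real code should use vetted routines such as hmac.compare_digest.

```python
import hmac

def leaky_equals(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: running time depends on where the first
    mismatch occurs, so it can leak secret bytes through timing."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: bytes, b: bytes) -> bool:
    """Data-oblivious comparison: every byte is always inspected, so the
    control flow does not depend on the secret data."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

assert constant_time_equals(b"secret", b"secret") == hmac.compare_digest(b"secret", b"secret")
```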

Function-as-a-Service (FaaS) is a cloud computing paradigm offering an event-driven execution model to applications. It features serverless attributes by eliminating resource management responsibilities from developers, and it offers transparent and on-demand scalability of applications. Typical serverless applications have stringent response time and scalability requirements and therefore rely on deployed services to provide quick and fault-tolerant feedback to clients. However, the FaaS paradigm suffers from cold starts, as there is a non-negligible delay associated with on-demand function initialization. This work focuses on reducing the frequency of cold starts on the platform by using Reinforcement Learning (RL). Our approach uses Q-learning and considers metrics such as function CPU utilization, existing function instances, and response failure rate to proactively initialize functions in advance based on the expected demand. The proposed solution was implemented on Kubeless and evaluated using a normalised real-world function demand trace with matrix multiplication as the workload. The results demonstrate favourable performance of the RL-based agent compared to Kubeless' default policy and a function keep-alive policy, improving throughput by up to 8.81% and reducing computation load and resource wastage by up to 55% and 37%, respectively, as a direct outcome of reduced cold starts.
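
The following is a minimal tabular Q-learning sketch of the kind of agent described above. The state discretization, action set, and hyperparameters are illustrative assumptions; the paper's agent operates on function CPU utilization, running instance counts, and response failure rate, and its reward design and platform integration with Kubeless are not reproduced here.

```python
import random
from collections import defaultdict

ACTIONS = (0, +1)          # 0: do nothing, +1: pre-warm one more instance
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = defaultdict(lambda: [0.0, 0.0])   # state -> Q-value per action

def discretize(cpu_util, instances, fail_rate):
    """Bucket the raw platform metrics into a small discrete state."""
    return (int(cpu_util * 10), min(instances, 10), int(fail_rate * 10))

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.randrange(len(ACTIONS))
    q = Q[state]
    return q.index(max(q))

def update(state, action, reward, next_state):
    """Standard Q-learning update."""
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```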

As an effective strategy, data augmentation (DA) alleviates data scarcity scenarios where deep learning techniques may fail. It has been widely applied in computer vision and was later introduced to natural language processing (NLP), where it achieves improvements in many tasks. One of the main focuses of DA methods is to improve the diversity of training data, thereby helping the model better generalize to unseen testing data. In this survey, we frame DA methods into three categories based on the diversity of the augmented data: paraphrasing, noising, and sampling. Our paper analyzes DA methods in detail according to these categories. Further, we also introduce their applications in NLP tasks as well as the remaining challenges.
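
As a simple example of the "noising" category, the sketch below applies random word deletion and random swaps to a sentence. These are generic operations chosen for illustration; the survey itself covers a much wider range of paraphrasing, noising, and sampling methods.

```python
import random

def noise_augment(sentence, p_delete=0.1, n_swaps=1, seed=None):
    """Noising-based augmentation: random token deletion plus random swaps."""
    rng = random.Random(seed)
    tokens = [t for t in sentence.split() if rng.random() > p_delete] or sentence.split()
    for _ in range(n_swaps):
        if len(tokens) > 1:
            i, j = rng.sample(range(len(tokens)), 2)
            tokens[i], tokens[j] = tokens[j], tokens[i]
    return " ".join(tokens)

print(noise_augment("data augmentation improves model generalization", seed=0))
```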

Federated Learning (FL) is a decentralized machine-learning paradigm in which a global server iteratively averages the model parameters of local users without accessing their data. User heterogeneity has imposed significant challenges on FL, as it can lead to drifted global models that are slow to converge. Knowledge Distillation has recently emerged to tackle this issue, by refining the server model using aggregated knowledge from heterogeneous users rather than directly averaging their model parameters. This approach, however, depends on a proxy dataset, making it impractical unless such a prerequisite is satisfied. Moreover, the ensemble knowledge is not fully utilized to guide local model learning, which may in turn affect the quality of the aggregated model. Inspired by the prior art, we propose a data-free knowledge distillation approach to address heterogeneous FL, where the server learns a lightweight generator to ensemble user information in a data-free manner, which is then broadcast to users, regulating local training using the learned knowledge as an inductive bias. Empirical studies supported by theoretical implications show that our approach facilitates FL with better generalization performance using fewer communication rounds, compared with the state-of-the-art.
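
The sketch below shows a heavily simplified, assumed version of the server-side, data-free ensemble step: a lightweight conditional generator is trained so that its generated features make the ensemble of user classifier heads predict the sampled labels. Architectures, dimensions, and the loss are placeholders, and the broadcast of the generator to users as an inductive bias for local training is omitted.

```python
import torch
import torch.nn.functional as F
from torch import nn

class Generator(nn.Module):
    """Lightweight label-conditional feature generator (illustrative)."""
    def __init__(self, n_classes=10, noise_dim=32, feat_dim=64):
        super().__init__()
        self.embed = nn.Embedding(n_classes, noise_dim)
        self.net = nn.Sequential(nn.Linear(2 * noise_dim, 128), nn.ReLU(),
                                 nn.Linear(128, feat_dim))

    def forward(self, y):
        z = torch.randn(y.size(0), self.embed.embedding_dim)
        return self.net(torch.cat([z, self.embed(y)], dim=1))

def server_distill_step(generator, opt, user_heads, n_classes=10, batch=64):
    """One generator update: generated features should elicit the ensemble's
    (averaged) predictions for the sampled labels. user_heads are the users'
    classifier heads (e.g. nn.Linear(feat_dim, n_classes)), assumed frozen."""
    y = torch.randint(0, n_classes, (batch,))
    feats = generator(y)
    ensemble_logits = torch.stack([h(feats) for h in user_heads]).mean(0)
    loss = F.cross_entropy(ensemble_logits, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```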

The vast amount of data generated by networks of sensors, wearables, and Internet of Things (IoT) devices underscores the need for advanced modeling techniques that leverage the spatio-temporal structure of decentralized data, given the need for edge computation and licensing (data access) constraints. While federated learning (FL) has emerged as a framework for model training without requiring direct data sharing and exchange, effectively modeling the complex spatio-temporal dependencies to improve forecasting capabilities remains an open problem. On the other hand, state-of-the-art spatio-temporal forecasting models assume unfettered access to the data, neglecting constraints on data sharing. To bridge this gap, we propose a federated spatio-temporal model -- Cross-Node Federated Graph Neural Network (CNFGNN) -- which explicitly encodes the underlying graph structure using a graph neural network (GNN)-based architecture under the constraint of cross-node federated learning, which requires that data in a network of nodes is generated locally on each node and remains decentralized. CNFGNN operates by disentangling the temporal dynamics modeling on devices from the spatial dynamics modeling on the server, using alternating optimization to reduce the communication cost and facilitate computation on the edge devices. Experiments on the traffic flow forecasting task show that CNFGNN achieves the best forecasting performance in both transductive and inductive learning settings with no extra computation cost on edge devices, while incurring a modest communication cost.
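
A very rough sketch of the device/server split described above is given below: each node encodes its own time series locally (here with a GRU), and the server mixes the resulting node embeddings along the graph (here one round of normalized-adjacency message passing as a crude stand-in for the GNN). The alternating optimization and communication protocol are not shown, and all dimensions are illustrative.

```python
import torch
from torch import nn

class NodeEncoder(nn.Module):
    """On-device temporal model: encodes a node's local time series."""
    def __init__(self, in_dim=1, hidden=32):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)

    def forward(self, x):                 # x: (batch, time, in_dim)
        _, h = self.gru(x)
        return h[-1]                      # (batch, hidden)

def server_spatial_step(node_embeddings, adj):
    """Server-side spatial mixing of node embeddings along the graph.

    node_embeddings: (n_nodes, hidden); adj: (n_nodes, n_nodes) adjacency."""
    adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return adj @ node_embeddings          # (n_nodes, hidden)
```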

Data transmission between two or more digital devices in industry and government demands secure and agile technology. Digital information distribution often requires the deployment of Internet of Things (IoT) devices and data fusion techniques, which have gained popularity in both civilian and military environments, such as in the emergence of Smart Cities and the Internet of Battlefield Things (IoBT). This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex Big Data problem. Due to the potentially sensitive nature of IoT datasets, Blockchain technology is used to facilitate secure sharing of IoT datasets, allowing digital information to be distributed but not copied. However, blockchain has several limitations related to complexity, scalability, and excessive energy consumption. We propose an approach to hide information (a sensor signal) by transforming it into an image or an audio signal. As part of ongoing military modernization efforts, we investigate a sensor fusion approach, examine the challenges of enabling intelligent identification and detection operations, and demonstrate the feasibility of the proposed Deep Learning and Anomaly Detection models, which can support future applications such as a hand-gesture alert system based on wearable devices.
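
To make the signal-hiding idea concrete, here is one simple, assumed way to encode a 1-D sensor signal as a grayscale image: min-max scale to 8-bit values, pad, and reshape into a square array. The exact transform used in the paper (for example a spectrogram rather than raw reshaping) may differ.

```python
import numpy as np

def signal_to_image(signal, side=64):
    """Encode a 1-D sensor signal as an 8-bit grayscale image (sketch)."""
    s = np.asarray(signal, dtype=np.float64)
    s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # min-max scale to [0, 1]
    pixels = np.round(s * 255).astype(np.uint8)
    padded = np.zeros(side * side, dtype=np.uint8)
    padded[:min(pixels.size, side * side)] = pixels[:side * side]
    return padded.reshape(side, side)

img = signal_to_image(np.sin(np.linspace(0, 20, 3000)))   # (64, 64) uint8 array
```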

For better user experience and business effectiveness, Click-Through Rate (CTR) prediction has been one of the most important tasks in E-commerce. Although extensive CTR prediction models have been proposed, learning good representations of items from multimodal features is still under-investigated, considering that an item in E-commerce usually contains multiple heterogeneous modalities. Previous works either concatenate the multiple modality features, which is equivalent to giving a fixed importance weight to each modality, or learn dynamic weights of different modalities for different items through techniques like attention mechanisms. However, there usually exists common redundant information across multiple modalities. Dynamic modality weights computed from this redundant information may not correctly reflect the different importance of each modality. To address this, we explore the complementarity and redundancy of modalities by treating modality-specific and modality-invariant features differently. We propose a novel Multimodal Adversarial Representation Network (MARN) for the CTR prediction task. A multimodal attention network first calculates the weights of multiple modalities for each item according to its modality-specific features. Then a multimodal adversarial network learns modality-invariant representations, for which a double-discriminator strategy is introduced. Finally, we obtain the multimodal item representations by combining both modality-specific and modality-invariant representations. We conduct extensive experiments on both public and industrial datasets, and the proposed method consistently achieves remarkable improvements over state-of-the-art methods. Moreover, the approach has been deployed in an operational E-commerce system, and online A/B testing further demonstrates its effectiveness.
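
For context, the sketch below shows one common way to implement the adversarial ingredient for modality-invariant representations: a gradient-reversal layer feeding a modality discriminator. This is an illustrative stand-in only; the full MARN model (the modality attention network, the double-discriminator strategy, and the CTR head) is not reproduced here, and all dimensions are assumptions.

```python
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients backward."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class ModalityDiscriminator(nn.Module):
    """Tries to tell which modality a feature came from; the reversed gradient
    pushes the encoder toward modality-invariant features."""
    def __init__(self, dim=64, n_modalities=3, lamb=1.0):
        super().__init__()
        self.lamb = lamb
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_modalities))

    def forward(self, invariant_feat):
        reversed_feat = GradReverse.apply(invariant_feat, self.lamb)
        return self.net(reversed_feat)   # logits over modalities
```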
