We propose Breath to Pair (B2P), a protocol for pairing and shared-key generation for wearable devices that leverages the wearer's respiration activity to ensure that the devices are part of the same body-area network. We assume that the devices exploit different types of sensors to extract and process the respiration signal. We illustrate B2P for the case of two devices that use respiratory inductance plethysmography (RIP) and accelerometer sensors, respectively. Supporting different types of sensors in pairing allows us to include wearable devices that use a wide variety of sensors. In practice, this sensor diversity creates a number of challenges that limit the ability of the shared-key establishment algorithm to generate matching keys. The two main obstacles are the lack of synchronization across the devices and the need to correct noise-induced mismatches between the generated key bit-strings. B2P addresses the synchronization challenge by utilizing Change Point Detection (CPD) to detect abrupt changes in the respiration signal and treating their occurrences as synchronization points. Any potential mismatches are handled by optimal quantization and encoding of the respiration signal in order to maximize the error correction rate and minimize the message overhead. Extensive evaluation on a dataset collected from 30 volunteers demonstrates that our protocol can generate a secure 256-bit key every 2.85 seconds (around one breathing cycle). Particular attention is given to securing B2P against device impersonation attacks.
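
The following is a minimal sketch of the overall idea, not the authors' implementation: it detects an abrupt change in a respiration-like signal as a synchronization point, then quantizes the following segment into Gray-coded key bits on two noisy copies of the signal. The windowed-mean change detector, the 2-bit quantile quantizer, and all thresholds are illustrative assumptions.

```python
import numpy as np

def change_points(signal, window=25, threshold=1.0):
    """Flag samples where the local mean shifts abruptly (a crude CPD stand-in)."""
    points = []
    for i in range(window, len(signal) - window):
        left = signal[i - window:i].mean()
        right = signal[i:i + window].mean()
        if abs(right - left) > threshold * signal.std():
            if not points or i - points[-1] > window:   # suppress near-duplicate detections
                points.append(i)
    return points

def quantize_segment(segment, bits_per_sample=2):
    """Quantile-based quantizer with Gray coding to limit single-bit mismatches."""
    levels = 2 ** bits_per_sample
    edges = np.quantile(segment, np.linspace(0, 1, levels + 1)[1:-1])
    symbols = np.digitize(segment, edges)
    gray = symbols ^ (symbols >> 1)
    return "".join(format(int(g), f"0{bits_per_sample}b") for g in gray)

# Toy respiration-like signal observed (noisily) by two devices
t = np.linspace(0, 30, 3000)
breath = np.sin(2 * np.pi * t / 3.0) + 2.0 * (t > 15)   # abrupt change at t = 15 s
rip = breath + 0.05 * np.random.randn(t.size)
acc = breath + 0.05 * np.random.randn(t.size)

sync = change_points(rip)   # in a real run each device would detect its own sync points
if sync:
    key_rip = quantize_segment(rip[sync[0]:sync[0] + 200])
    key_acc = quantize_segment(acc[sync[0]:sync[0] + 200])
    agree = sum(a == b for a, b in zip(key_rip, key_acc)) / len(key_rip)
    print(f"sync point: {sync[0]}, bit agreement before reconciliation: {agree:.2%}")
```

The residual disagreement between the two bit-strings is exactly what the protocol's error-correction/encoding stage must absorb.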

Low-Power Wide-Area Network (LPWAN) is an emerging communication standard for the Internet of Things (IoT) that has strong potential to support connectivity of a large number of roadside sensors with an extremely long communication range. However, the high operation cost of managing such a large-scale roadside sensor network remains a significant challenge. In this article, we propose Low Operation-Cost LPWAN (LOC-LPWAN), a novel optimization framework designed to reduce the operation cost using cross-technology communication (CTC). LOC-LPWAN allows roadside sensors to offload sensor data to passing vehicles that in turn forward the data to an LPWAN server using CTC, aiming to reduce the data subscription cost. LOC-LPWAN finds the optimal communication schedule between sensors and vehicles to maximize the throughput given an available budget. Furthermore, LOC-LPWAN optimizes the fairness among sensors by preventing certain sensors from dominating the channel for data transmission. LOC-LPWAN can also be configured to ensure that data packets are received within a specific time bound. Extensive numerical analysis performed with real-world taxi data consisting of 40 vehicles with 24-hour trajectories demonstrates that LOC-LPWAN reduces the cost by 50% compared with the baseline approach in which no vehicle is used to relay packets. The results also show that LOC-LPWAN improves the throughput by 72.6%, enhances the fairness by 65.7%, and reduces the delay by 28.8% compared with a greedy algorithm given the same budget.
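
As a rough illustration of the scheduling problem (an assumption for exposition, not the LOC-LPWAN optimizer, which is compared against a greedy baseline): the sketch below greedily selects sensor-to-vehicle offloading contacts under a budget, with a simple fairness penalty so no single sensor monopolizes the relays. The contact list, costs, and the fairness weight are all illustrative.

```python
from collections import defaultdict

# (sensor_id, vehicle_id, contact_start_s, bytes_transferable, cost)
contacts = [
    ("s1", "v1", 10, 4000, 1.0),
    ("s2", "v1", 12, 3500, 1.0),
    ("s1", "v2", 30, 5000, 1.2),
    ("s3", "v2", 31, 2000, 0.8),
    ("s2", "v3", 55, 4500, 1.1),
]

def schedule(contacts, budget, fairness_weight=0.5):
    served = defaultdict(int)          # bytes already relayed per sensor
    remaining = list(contacts)
    plan, spent = [], 0.0
    while remaining:
        # Re-score after every pick so heavily served sensors lose priority.
        best = max(remaining, key=lambda c: c[3] - fairness_weight * served[c[0]])
        sensor, vehicle, start, size, cost = best
        remaining.remove(best)
        if spent + cost > budget or size - fairness_weight * served[sensor] <= 0:
            continue
        plan.append((sensor, vehicle, start, size))
        served[sensor] += size
        spent += cost
    return plan, spent

plan, spent = schedule(contacts, budget=3.0)
print(f"scheduled {len(plan)} contacts, total cost {spent:.1f}")
for entry in plan:
    print(entry)
```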

Recommender systems exploit interaction history to estimate user preference and have been heavily used in a wide range of industry applications. However, static recommendation models struggle to answer two important questions well due to inherent shortcomings: (a) What exactly does a user like? (b) Why does a user like an item? The shortcomings stem from the way static models learn user preference, i.e., without explicit instructions and active feedback from users. The recent rise of conversational recommender systems (CRSs) changes this situation fundamentally. In a CRS, users and the system can dynamically communicate through natural language interactions, which provides unprecedented opportunities to explicitly obtain the exact preference of users. Considerable effort, spread across disparate settings and applications, has been put into developing CRSs, yet existing models, technologies, and evaluation methods for CRSs are far from mature. In this paper, we provide a systematic review of the techniques used in current CRSs. We summarize the key challenges of developing CRSs in five directions: (1) question-based user preference elicitation; (2) multi-turn conversational recommendation strategies; (3) dialogue understanding and generation; (4) exploitation-exploration trade-offs; (5) evaluation and user simulation. These research directions involve multiple research fields such as information retrieval (IR), natural language processing (NLP), and human-computer interaction (HCI). Based on these research directions, we discuss some future challenges and opportunities, and we provide a road map for researchers from multiple communities to get started in this area. We hope this survey can help to identify and address challenges in CRSs and inspire future research.
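
To make the first surveyed direction concrete, here is a minimal sketch (an illustrative assumption, not taken from the survey) of question-based preference elicitation: the system repeatedly asks about the attribute whose yes/no answer best splits the remaining candidate items, i.e., the attribute whose answer entropy is highest. The toy catalogue and the simulated user are placeholders.

```python
import math

# candidate items -> set of attributes (toy catalogue)
items = {
    "item_a": {"italian", "cheap", "outdoor"},
    "item_b": {"italian", "expensive"},
    "item_c": {"japanese", "cheap"},
    "item_d": {"japanese", "expensive", "outdoor"},
}

def best_question(candidates):
    attrs = set().union(*candidates.values())
    def entropy(attr):
        p = sum(attr in feats for feats in candidates.values()) / len(candidates)
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return max(attrs, key=entropy)

candidates = dict(items)
while len(candidates) > 1:
    attr = best_question(candidates)
    answer = attr in items["item_d"]          # simulate a user whose target is item_d
    candidates = {i: f for i, f in candidates.items() if (attr in f) == answer}
    print(f"asked about '{attr}', answer={answer}, {len(candidates)} candidates left")
print("recommend:", next(iter(candidates)))
```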

Autonomous systems generate a huge amount of multimodal data that are collected and processed on the Edge in order to enable AI-based services. The collected datasets are pre-processed to extract informative attributes, called features, which are used to feed AI algorithms. Due to the limited computational and communication resources of some cyber-physical systems (CPS), like autonomous vehicles, selecting the subset of relevant features from a dataset is of the utmost importance, in order to improve the results achieved by learning methods and to reduce computation and communication costs. Feature selection is the candidate approach: it assumes that data contain a certain number of redundant or irrelevant attributes that can be eliminated. In this work, we propose, for the first time, a federated feature selection method suitable for being executed in a distributed manner. The quality of our method is confirmed by the promising results achieved on two different datasets. Precisely, our results show that a fleet of autonomous vehicles finds a consensus on the optimal set of features that they exploit to reduce data transmission by up to 99% with negligible information loss.
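
The sketch below is an illustrative assumption, not the paper's algorithm: each vehicle scores features locally, only the rankings are exchanged (no raw data leaves a vehicle), and a consensus feature set is formed by majority vote. The correlation-based relevance score, the top-k cutoff, and the vote threshold are placeholders for whatever measure and aggregation rule are actually used.

```python
import numpy as np

def local_ranking(X, y, top_k=3):
    """Score each feature by |correlation| with the target, return the top-k indices."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return set(np.argsort(scores)[::-1][:top_k])

def consensus(rankings, min_votes):
    votes = {}
    for ranked in rankings:
        for f in ranked:
            votes[f] = votes.get(f, 0) + 1
    return sorted(int(f) for f, v in votes.items() if v >= min_votes)

rng = np.random.default_rng(0)
n_vehicles, n_samples, n_features = 5, 200, 10
rankings = []
for _ in range(n_vehicles):
    X = rng.normal(size=(n_samples, n_features))
    y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=n_samples)  # informative: 1, 4
    rankings.append(local_ranking(X, y))

selected = consensus(rankings, min_votes=n_vehicles // 2 + 1)
print("consensus feature set:", selected)   # expected to contain features 1 and 4
```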

There are various cluster validity measures used for evaluating clustering results. One of the main objectives of using these measures is to seek the optimal, unknown number of clusters. Some measures work well for clusters with different densities, sizes, and shapes. Yet, one weakness that those validity measures share is that they sometimes provide only one clear optimal number of clusters. That number is actually unknown, and there might be more than one potential sub-optimal option that a user may wish to choose depending on the application. We develop two new cluster validity indices based on the correlation between the actual distance between a pair of data points and the distance between the centroids of the clusters in which the two points are located. Our proposed indices consistently yield several peaks at different numbers of clusters, which overcomes the weakness stated above. Furthermore, the introduced correlation can also be used for evaluating the quality of a selected clustering result. Several experiments in different scenarios, including the well-known iris dataset and a real-world marketing application, have been conducted in order to compare the proposed validity indices with several well-known ones.
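
A minimal sketch of the underlying correlation (an assumption about the general idea, not the exact proposed indices): for each candidate number of clusters k, compute the Pearson correlation between (a) the pairwise distances of data points and (b) the distances between the centroids of the clusters the two points were assigned to. Higher correlation suggests the partition preserves the data's distance structure.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

def centroid_correlation_index(X, labels, centroids):
    n = len(X)
    point_d, centroid_d = [], []
    for i in range(n):
        for j in range(i + 1, n):
            point_d.append(np.linalg.norm(X[i] - X[j]))
            centroid_d.append(np.linalg.norm(centroids[labels[i]] - centroids[labels[j]]))
    return np.corrcoef(point_d, centroid_d)[0, 1]

X = load_iris().data
for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    score = centroid_correlation_index(X, km.labels_, km.cluster_centers_)
    print(f"k={k}: correlation index = {score:.3f}")
```

Scanning the printed scores over k is the simplest way to see several competing peaks rather than a single forced optimum.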

As an essential ingredient of quantum networks, quantum conference key agreement (QCKA) provides unconditional secret keys among multiple parties, enabling only legitimate users to decrypt the encrypted message. Recently, several QCKA protocols employing the twin-field scheme have been proposed to extend the transmission distance. These protocols, however, suffer from relatively low conference key rates and short transmission distances over asymmetric channels, which demands a prompt solution in practice. Here, we consider a tripartite QCKA protocol utilizing the idea of the sending-or-not-sending twin-field scheme and propose a high-efficiency QCKA over asymmetric channels by removing the symmetric-parameter condition. Moreover, we provide a composable finite-key analysis with a rigorous security proof against general attacks by exploiting the entropic uncertainty relation for multiparty systems. Our protocol greatly improves the feasibility of establishing conference keys over asymmetric channels.

Existing Collaborative Filtering (CF) methods are mostly designed based on the idea of matching, i.e., by learning user and item embeddings from data using shallow or deep models, they try to capture the associative relevance patterns in data, so that a user embedding can be matched with relevant item embeddings using designed or learned similarity functions. However, as a cognitive rather than a perceptual task, recommendation requires not only the ability of pattern recognition and matching from data, but also the ability of cognitive reasoning over data. In this paper, we propose to advance CF to Collaborative Reasoning (CR), which means that each user knows part of the reasoning space, and they collaborate for reasoning in the space to estimate preferences for each other. Technically, we propose a Neural Collaborative Reasoning (NCR) framework to bridge learning and reasoning. Specifically, we integrate the power of representation learning and logical reasoning, where representations capture similarity patterns in data from perceptual perspectives, and logic facilitates cognitive reasoning for informed decision making. An important challenge, however, is to bridge differentiable neural networks and symbolic reasoning in a shared architecture for optimization and inference. To solve the problem, we propose a modularized reasoning architecture, which learns logical operations such as AND ($\wedge$), OR ($\vee$) and NOT ($\neg$) as neural modules for implication reasoning ($\rightarrow$). In this way, logical expressions can be equivalently organized as neural networks, so that logical reasoning and prediction can be conducted in a continuous space. Experiments on real-world datasets verify the advantages of our framework compared with shallow, deep, and reasoning models.
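
A minimal PyTorch sketch of the "logical operators as neural modules" idea (an assumption for illustration, not the authors' NCR architecture): each operator is a small MLP over event embeddings, the implication a → b is rewritten as NOT(a) OR b, and its output is scored against a learned "true" anchor vector. Dimensions and the similarity-based truth score are illustrative choices.

```python
import torch
import torch.nn as nn

DIM = 64

class Op(nn.Module):
    """A neural module for one logical operator (AND / OR take two inputs, NOT takes one)."""
    def __init__(self, n_inputs):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_inputs * DIM, DIM), nn.ReLU(), nn.Linear(DIM, DIM))
    def forward(self, *xs):
        return self.net(torch.cat(xs, dim=-1))

class NeuralLogic(nn.Module):
    def __init__(self):
        super().__init__()
        self.AND, self.OR, self.NOT = Op(2), Op(2), Op(1)
        self.true_vec = nn.Parameter(torch.randn(DIM))   # anchor vector representing "true"
    def truth(self, expr):
        """Cosine similarity to the TRUE anchor, squashed to (0, 1)."""
        sim = nn.functional.cosine_similarity(expr, self.true_vec.expand_as(expr), dim=-1)
        return (sim + 1) / 2
    def implies(self, a, b):
        return self.truth(self.OR(self.NOT(a), b))        # a -> b  ==  NOT a OR b

model = NeuralLogic()
user_likes_x = torch.randn(4, DIM)   # batch of interaction-event embeddings
user_likes_y = torch.randn(4, DIM)
score = model.implies(user_likes_x, user_likes_y)   # predicted truth of the implication
print(score)
```

Training would push `score` toward 1 for observed implications and toward 0 for sampled negatives, while regularizers keep the modules behaving like their logical counterparts.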

The problem of knowledge-based visual question answering involves answering questions that require external knowledge in addition to the content of the image. Such knowledge typically comes in a variety of forms, including visual, textual, and commonsense knowledge. The use of more knowledge sources, however, also increases the chance of retrieving more irrelevant or noisy facts, making it difficult to comprehend the facts and find the answer. To address this challenge, we propose Multi-modal Answer Validation using External knowledge (MAVEx), where the idea is to validate a set of promising answer candidates based on answer-specific knowledge retrieval. This is in contrast to existing approaches that search for the answer in a vast collection of often irrelevant facts. Our approach aims to learn which knowledge source should be trusted for each answer candidate and how to validate the candidate using that source. We consider a multi-modal setting, relying on both textual and visual knowledge resources, including images searched using Google, sentences from Wikipedia articles, and concepts from ConceptNet. Our experiments with OK-VQA, a challenging knowledge-based VQA dataset, demonstrate that MAVEx achieves new state-of-the-art results.
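
The following sketch is only a schematic illustration of answer-candidate validation, not MAVEx itself: each candidate answer is scored by how well answer-specific facts retrieved from several sources support it, weighted by a per-source trust value. The text encoder and the retrieval step are stubbed out with deterministic pseudo-embeddings and dummy facts, so only the control flow is meaningful.

```python
import numpy as np

EMB = 16

def embed(text):
    """Stand-in for a real text encoder: deterministic pseudo-embedding per string."""
    seed = abs(hash(text)) % (2 ** 32)
    return np.random.default_rng(seed).normal(size=EMB)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def validate(question, candidates, retrieve, source_trust):
    scores = {}
    for cand in candidates:
        support = 0.0
        for source, trust in source_trust.items():
            facts = retrieve(source, question, cand)       # answer-specific retrieval
            if facts:
                best = max(cosine(embed(cand), embed(f)) for f in facts)
                support += trust * best
        scores[cand] = support
    return max(scores, key=scores.get), scores

# Toy retrieval: pretend each source returns one fact string per candidate.
def retrieve(source, question, cand):
    return [f"{source} fact about {cand}"]

source_trust = {"wikipedia": 0.5, "images": 0.3, "conceptnet": 0.2}
answer, scores = validate("what animal is this?", ["zebra", "horse", "donkey"],
                          retrieve, source_trust)
print(answer, scores)
```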

Deep learning has penetrated all aspects of our lives and brought us great convenience. However, the process of building a high-quality deep learning system for a specific task is not only time-consuming but also requires lots of resources and relies on human expertise, which hinders the development of deep learning in both industry and academia. To alleviate this problem, a growing number of research projects focus on automated machine learning (AutoML). In this paper, we provide a comprehensive and up-to-date study of state-of-the-art AutoML. First, we introduce AutoML techniques in detail according to the stages of the machine learning pipeline. Then we summarize existing Neural Architecture Search (NAS) research, which is one of the most popular topics in AutoML, and compare the models generated by NAS algorithms with human-designed models. Finally, we present several open problems for future research.
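
As a concrete, if deliberately tiny, example of the search loop that NAS methods automate (an illustrative assumption, not any surveyed system): random search over a small MLP architecture space, the simplest baseline NAS methods are usually compared against. The search space and evaluation budget are toy-sized.

```python
import random
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

search_space = {
    "hidden_layer_sizes": [(32,), (64,), (128,), (64, 32), (128, 64)],
    "activation": ["relu", "tanh"],
    "alpha": [1e-4, 1e-3, 1e-2],
}

random.seed(0)
best_score, best_cfg = -1.0, None
for _ in range(10):                          # evaluation budget of 10 sampled architectures
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    model = MLPClassifier(max_iter=300, random_state=0, **cfg).fit(X_tr, y_tr)
    score = model.score(X_va, y_va)
    if score > best_score:
        best_score, best_cfg = score, cfg
print(f"best validation accuracy {best_score:.3f} with {best_cfg}")
```

Real NAS methods replace the random sampler with reinforcement learning, evolution, or gradient-based relaxations, and replace full training with cheaper performance estimates.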

Many vision-and-language tasks require commonsense reasoning beyond data-driven image and natural language processing. Here we adopt Visual Question Answering (VQA) as an example task, where a system is expected to answer a question in natural language about an image. Current state-of-the-art systems attempt to solve the task using deep neural architectures and achieve promising performance. However, the resulting systems are generally opaque, and they struggle to understand questions for which extra knowledge is required. In this paper, we present an explicit reasoning layer on top of a set of penultimate neural network-based systems. The reasoning layer enables reasoning and answering questions where additional knowledge is required, and at the same time provides an interpretable interface to the end users. Specifically, the reasoning layer adopts a Probabilistic Soft Logic (PSL) based engine to reason over a basket of inputs: visual relations, the semantic parse of the question, and background ontological knowledge from word2vec and ConceptNet. Experimental analysis of the answers and the key evidential predicates generated on the VQA dataset validates our approach.
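
To ground what a soft-logic rule looks like, here is a minimal sketch (an assumption for illustration, not the paper's PSL engine): PSL grounds rules with Lukasiewicz operators over [0, 1] truth values, and a toy rule below combines a detected visual relation with an ontological similarity score to support an answer candidate. The predicates and numbers are made up.

```python
def l_and(a, b):       # Lukasiewicz t-norm
    return max(0.0, a + b - 1.0)

def l_or(a, b):        # Lukasiewicz t-conorm
    return min(1.0, a + b)

def l_implies(a, b):   # Lukasiewicz implication
    return min(1.0, 1.0 - a + b)

# Soft truth values coming from upstream components (illustrative numbers):
visual_relation = {"holds(person, racket)": 0.9, "holds(person, bat)": 0.2}
word_similarity = {("racket", "tennis"): 0.8, ("bat", "tennis"): 0.3}

def answer_support(candidate_sport, obj):
    # Rule: holds(person, obj) AND related(obj, sport) -> answer(sport)
    body = l_and(visual_relation.get(f"holds(person, {obj})", 0.0),
                 word_similarity.get((obj, candidate_sport), 0.0))
    return body        # the grounded rule body's truth value supports the answer

for obj in ("racket", "bat"):
    print(f"support for answer 'tennis' via {obj}: {answer_support('tennis', obj):.2f}")
```

The printed supports are exactly the kind of evidential predicates the reasoning layer can expose to end users as an explanation.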

Person re-identification (re-id) refers to matching pedestrians across disjoint, non-overlapping camera views. The most effective way to match pedestrians undergoing significant visual variations is to seek reliably invariant features that describe the person of interest faithfully. Most existing methods are supervised: they produce discriminative features by relying on labeled image pairs in correspondence. However, annotating pair-wise images is prohibitively labor-intensive and thus impractical for large-scale camera networks. Moreover, seeking comparable representations across camera views demands a flexible model that can address the complex distributions of images. In this work, we study the co-occurrence statistical patterns between pairs of images and propose a crossing Generative Adversarial Network (Cross-GAN) for learning a joint distribution of cross-image representations in an unsupervised manner. Given a pair of person images, the proposed model consists of a variational auto-encoder that encodes the pair into respective latent variables, a proposed cross-view alignment that reduces the view disparity, and an adversarial layer that seeks the joint distribution of the latent representations. The learned latent representations are well aligned and reflect the co-occurrence patterns of paired images. We empirically evaluate the proposed model on challenging datasets, and our results show the importance of joint invariant features in improving the matching rates of person re-id in comparison with semi-supervised and unsupervised state-of-the-art methods.
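
A minimal PyTorch sketch of the three ingredients just described (an assumption, not the paper's full Cross-GAN): two VAE-style encoders map a cross-view image pair to latent codes, an alignment loss pulls the two codes together, and a discriminator pushes their joint distribution toward a prior. Network sizes, the pre-extracted-feature inputs, and loss weights are illustrative.

```python
import torch
import torch.nn as nn

LATENT = 32

class Encoder(nn.Module):
    def __init__(self, in_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu, self.logvar = nn.Linear(128, LATENT), nn.Linear(128, LATENT)
    def forward(self, x):
        h = self.backbone(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return z, mu, logvar

enc_a, enc_b = Encoder(), Encoder()            # one encoder per camera view
disc = nn.Sequential(nn.Linear(2 * LATENT, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()

x_a, x_b = torch.randn(8, 512), torch.randn(8, 512)   # pre-extracted image features (toy)
z_a, mu_a, lv_a = enc_a(x_a)
z_b, mu_b, lv_b = enc_b(x_b)

align_loss = nn.functional.mse_loss(z_a, z_b)          # cross-view alignment
kl = (-0.5 * torch.mean(1 + lv_a - mu_a.pow(2) - lv_a.exp())
      - 0.5 * torch.mean(1 + lv_b - mu_b.pow(2) - lv_b.exp()))
joint = torch.cat([z_a, z_b], dim=1)
prior = torch.randn_like(joint)
d_loss = bce(disc(prior), torch.ones(8, 1)) + bce(disc(joint.detach()), torch.zeros(8, 1))
g_loss = align_loss + kl + bce(disc(joint), torch.ones(8, 1))    # encoder-side objective
print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```

In an actual training loop the two losses would be minimized alternately with separate optimizers for the encoders and the discriminator.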
