
A vast population of low-cost, low-power transmitters sporadically sending small amounts of data over a common wireless medium is one of the main scenarios for Internet of Things (IoT) data communications. At the medium access layer, grant-free solutions may be preferred to reduce overhead, even at the cost of multiple-access interference. Unsourced multiple access (UMA) has recently been established as a relevant framework for energy-efficient grant-free protocols. The use of a compressed sensing (CS) transmission phase is key in one of the two main classes of UMA protocols, yet little attention has been paid to sparse greedy algorithms such as orthogonal matching pursuit (OMP) and its variants. We analyze their performance and provide guidance on how to optimally set up the CS phase. Minimum average transmission power and minimum number of channel uses are investigated, together with performance in terms of the receiver operating characteristic (ROC). Interestingly, we show that the basic OMP and generalized OMP (gOMP) are the most competitive algorithms in their class.
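As a point of reference for the greedy recovery algorithms discussed above, the following is a minimal sketch of basic OMP in Python; the measurement matrix A and measurement vector y are generic placeholders rather than the specific UMA/CS setup analyzed in the paper. gOMP differs only in selecting several columns per iteration instead of one.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-6):
    """Orthogonal matching pursuit: greedily recover a sparse x with y ~ A @ x.

    A        -- (m, n) measurement matrix with (roughly) normalized columns
    y        -- (m,) measurement vector
    sparsity -- maximum number of non-zero entries to recover
    """
    m, n = A.shape
    residual = y.astype(float).copy()
    support, x_hat = [], np.zeros(n)
    coeffs = np.zeros(0)
    for _ in range(sparsity):
        # Select the column most correlated with the current residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0               # never re-select a column
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the enlarged support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
        if np.linalg.norm(residual) < tol:
            break
    x_hat[support] = coeffs
    return x_hat
```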

Related Content

Compressed sensing (CS) has been an extremely active research frontier in recent years and has attracted attention in several application domains. Compressed sensing, also referred to as compressive sensing or compressive sampling, roughly means that a signal is compressed at the moment it is acquired (during analog-to-digital conversion). Unlike sparse representation, compressed sensing is concerned with how to exploit the sparsity inherent in a signal in order to recover the original signal from a subset of observed samples.
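To make the measurement model concrete, the toy setup below generates a k-sparse signal, takes m ≪ n random linear measurements, and is ready to be handed to a greedy solver such as the omp() sketch above; the dimensions n, m, and k are purely illustrative choices, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                 # ambient dimension, measurements, sparsity (illustrative)

# A k-sparse signal: only k of the n entries are non-zero.
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.normal(size=k)

# Undersampled linear measurements y = A x with m << n (compression at acquisition).
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

# Recovery exploits sparsity, e.g. x_hat = omp(A, y, sparsity=k) using the sketch above.
```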

The logic of information flows (LIF) has recently been proposed as a general framework in the field of knowledge representation. In this framework, tasks of a procedural nature can still be modeled in a declarative, logic-based fashion. In this paper, we focus on the task of query processing under limited access patterns, a well-studied problem in the database literature. We show that LIF is well-suited for modeling this task. Toward this goal, we introduce a variant of LIF called "forward" LIF (FLIF), in a first-order setting. FLIF takes a novel graph-navigational approach; it is an XPath-like language that nevertheless turns out to be equivalent to the "executable" fragment of first-order logic defined by Nash and Ludäscher. One can also classify the variables in FLIF expressions as inputs and outputs. Expressions where inputs and outputs are disjoint, referred to as io-disjoint FLIF expressions, allow a particularly transparent translation into algebraic query plans that respect the access limitations. Finally, we show that general FLIF expressions can always be put into io-disjoint form.

Achieving robust uncertainty quantification for deep neural networks represents an important requirement in many real-world applications of deep learning, such as medical imaging, where it is necessary to assess the reliability of a neural network's prediction. Bayesian neural networks are a promising approach for modeling uncertainties in deep neural networks. Unfortunately, generating samples from the posterior distribution of neural networks is a major challenge. One significant advance in that direction would be the incorporation of adaptive step sizes, similar to modern neural network optimizers, into Markov chain Monte Carlo (MCMC) sampling algorithms without significantly increasing computational demand. Over the past few years, several papers have introduced sampling algorithms with claims that they achieve this property. However, do they indeed converge to the correct distribution? In this paper, we demonstrate that these methods can have a substantial bias in the distribution they sample, even in the limit of vanishing step sizes and at full batch size.
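One family of samplers alluded to above augments stochastic-gradient Langevin dynamics with an RMSprop-style adaptive preconditioner (pSGLD is a representative example, though not necessarily one of the specific methods examined in the paper). A minimal, purely illustrative update step is sketched below; note that the correction term involving the derivative of the preconditioner is omitted, and dropping such terms is one well-known way adaptive-step-size samplers can become biased.

```python
import numpy as np

def psgld_step(theta, grad_log_post, v, step_size=1e-3, alpha=0.99, eps=1e-5, rng=None):
    """One pSGLD-style update (illustrative sketch; hyperparameters are placeholders).

    theta         -- current parameter vector
    grad_log_post -- stochastic estimate of grad log p(theta | data)
    v             -- running average of squared gradients (optimizer-like state)
    """
    rng = rng or np.random.default_rng()
    v = alpha * v + (1 - alpha) * grad_log_post**2        # adaptive, RMSprop-like statistic
    precond = 1.0 / (np.sqrt(v) + eps)                    # diagonal preconditioner
    noise = rng.normal(size=theta.shape)
    theta = (theta
             + 0.5 * step_size * precond * grad_log_post  # drift toward higher posterior density
             + np.sqrt(step_size * precond) * noise)      # injected Gaussian noise
    # NOTE: the term involving the derivative of the preconditioner is dropped here;
    # omitting it is one known source of sampling bias in adaptive-step-size samplers.
    return theta, v
```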

With more scientific fields relying on neural networks (NNs) to process data arriving at extreme throughputs and latencies, it is crucial to develop NNs with all their parameters stored on-chip. In many of these applications, there is not enough time to go off-chip and retrieve weights. Moreover, off-chip memory such as DRAM does not have the bandwidth required to process these NNs as fast as the data is being produced (e.g., every 25 ns). As such, these extreme latency and bandwidth requirements have architectural implications for the hardware intended to run these NNs: 1) all NN parameters must fit on-chip, and 2) codesigning custom/reconfigurable logic is often required to meet these latency and bandwidth constraints. In our work, we show that many scientific NN applications must run fully on-chip, in the extreme case requiring a custom chip to meet such stringent constraints.
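A back-of-envelope calculation illustrates why streaming weights from off-chip memory is infeasible at such rates. All numbers below except the 25 ns period cited in the text are hypothetical placeholders.

```python
params = 500_000            # weights in a modest NN (hypothetical)
bytes_per_weight = 2        # e.g., 16-bit fixed point (hypothetical)
inference_period_s = 25e-9  # new input every 25 ns (rate cited above)

# If every weight had to be fetched from DRAM once per inference:
required_bandwidth = params * bytes_per_weight / inference_period_s
print(f"{required_bandwidth / 1e12:.0f} TB/s")   # ~40 TB/s

# DRAM interfaces offer on the order of tens to hundreds of GB/s,
# so at these rates the parameters must live in on-chip memory.
```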

As the most critical production factor in the era of the digital economy, data will have a significant impact on social production and development. Energy enterprises possess data that is interconnected with multiple industries and characterized by diverse needs, sensitivity, and a long-term nature. The path to monetizing energy enterprises' data is challenging yet crucial. This paper explores the game-theoretic aspects of the data monetization process in energy enterprises by considering the relationships between enterprises and trading platforms. We construct a class of game decision models and study their equilibrium strategies. Our analysis shows that enterprises and platforms can adjust their respective benefits by regulating the wholesale price of data and the intensity of data value mining, thereby forming a benign equilibrium state. Furthermore, by integrating nonlinear dynamics theory, we discuss the dynamic characteristics present in multi-period repeated game processes. We find that, in multi-period dynamic decision-making, decision-makers should keep the adjustment parameters and initial states within reasonable ranges to avoid market failure. Finally, based on the theoretical and numerical analysis, we provide decision insights and recommendations for enterprise decision-making to facilitate data monetization through strategic interactions with trading platforms.

Crowdsourcing is increasingly used for efficient and cost-effective large-scale data labeling. To guarantee the quality of data labeling, multiple annotations need to be collected for each data sample, and truth inference algorithms have been developed to accurately infer the true labels. Although previous studies have released public datasets to evaluate the efficacy of truth inference algorithms, these datasets have typically focused on a single type of crowdsourcing task and neglected the temporal information associated with workers' annotation activities. These limitations significantly restrict the practical applicability of such algorithms, particularly in the context of long-term and online truth inference. In this paper, we introduce a substantial crowdsourcing annotation dataset collected from a real-world crowdsourcing platform. This dataset comprises approximately two thousand workers, one million tasks, and six million annotations. The data was gathered over a period of approximately six months from various types of tasks, and the timestamps of each annotation were preserved. We analyze the characteristics of the dataset from multiple perspectives and evaluate the effectiveness of several representative truth inference algorithms on this dataset. We anticipate that this dataset will stimulate future research on tracking workers' abilities over time in relation to different types of tasks, as well as on enhancing online truth inference.
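For reference, the simplest truth inference baseline against which such algorithms are usually compared is per-task majority voting; a minimal sketch follows, with a hypothetical (worker_id, task_id, label) schema rather than the actual format of the released dataset.

```python
from collections import Counter, defaultdict

def majority_vote(annotations):
    """Per task, return the most frequent label among all workers' annotations.

    annotations -- iterable of (worker_id, task_id, label) triples (hypothetical schema;
                   real crowdsourcing data would also carry the annotation timestamps).
    """
    labels_per_task = defaultdict(list)
    for _worker_id, task_id, label in annotations:
        labels_per_task[task_id].append(label)
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in labels_per_task.items()}

# Example: three workers annotate task t1, two annotate task t2.
print(majority_vote([("w1", "t1", "cat"), ("w2", "t1", "cat"), ("w3", "t1", "dog"),
                     ("w1", "t2", "dog"), ("w2", "t2", "dog")]))
# {'t1': 'cat', 't2': 'dog'}
```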

Knowledge graphs represent factual knowledge about the world as relationships between concepts and are critical for intelligent decision making in enterprise applications. New knowledge is inferred from the existing facts in the knowledge graphs by encoding the concepts and relations into low-dimensional feature vector representations. The most effective representations for this task, called Knowledge Graph Embeddings (KGE), are learned through neural network architectures. Due to their impressive predictive performance, they are increasingly used in high-impact domains like healthcare, finance and education. However, are the black-box KGE models adversarially robust for use in domains with high stakes? This thesis argues that state-of-the-art KGE models are vulnerable to data poisoning attacks, that is, their predictive performance can be degraded by systematically crafted perturbations to the training knowledge graph. To support this argument, two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time. These adversarial attacks target the task of predicting the missing facts in knowledge graphs using KGE models, and the evaluation shows that the simpler attacks are competitive with or outperform the computationally expensive ones. The thesis contributions not only highlight the security vulnerabilities of KGE models and provide an opportunity to fix them, but also help to understand their black-box predictive behaviour.
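To illustrate how KGE models score candidate facts (and hence what a poisoning attack tries to degrade), below is a sketch of the TransE scoring function, one widely used embedding model; it is given only as a representative example, not necessarily among the specific models studied in the thesis, and the embedding values are placeholders.

```python
import numpy as np

def transe_score(head, relation, tail):
    """TransE plausibility score: a true fact (h, r, t) should satisfy h + r ~ t.

    head, relation, tail -- low-dimensional embedding vectors.
    Scores closer to zero indicate more plausible triples, so missing-fact
    prediction ranks candidate tails by this score.
    """
    return -np.linalg.norm(head + relation - tail)

# Illustrative 4-dimensional embeddings (placeholder values).
h = np.array([0.1, 0.3, -0.2, 0.5])
r = np.array([0.2, -0.1, 0.4, 0.0])
t = np.array([0.3, 0.2, 0.2, 0.5])
print(transe_score(h, r, t))
```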

Graph neural networks (GNNs) have demonstrated a significant boost in prediction performance on graph data. At the same time, the predictions made by these models are often hard to interpret. In that regard, many efforts have been made to explain the prediction mechanisms of these models, e.g., GNNExplainer, XGNN and PGExplainer. Although such works present systematic frameworks for interpreting GNNs, a holistic review of explainable GNNs has been unavailable. In this survey, we present a comprehensive review of explainability techniques developed for GNNs. We focus on explainable graph neural networks and categorize them based on the explainable methods they use. We further provide the common performance metrics for GNN explanations and point out several future research directions.

The world population is anticipated to increase by close to 2 billion by 2050, causing a rapid escalation of food demand. A recent projection shows that the world is lagging behind in accomplishing the "Zero Hunger" goal, in spite of some advancements. Socio-economic and well-being fallout will affect food security, and vulnerable groups of people will suffer malnutrition. To cater to the needs of the increasing population, the agricultural industry needs to be modernized, become smart, and be automated. Traditional agriculture can be remade into efficient, sustainable, eco-friendly smart agriculture by adopting existing technologies. In this survey paper, we present the applications, technological trends, available datasets, networking options, and challenges in smart agriculture. How Agro Cyber Physical Systems are built upon the Internet-of-Agro-Things is discussed through various application fields. Agriculture 4.0 is also discussed as a whole. We focus on technologies such as Artificial Intelligence (AI) and Machine Learning (ML), which support automation, along with Distributed Ledger Technology (DLT), which provides data integrity and security. After an in-depth study of different architectures, we also present a smart agriculture framework that relies on the location of data processing. We have divided the open research problems of smart agriculture, as future research work, into two groups - a technological perspective and a networking perspective. AI, ML, the blockchain as a DLT, and Physical Unclonable Function (PUF) based hardware security fall under the technology group, whereas network-related attacks, fake data injection, and similar threats fall under the network research problem group.

Deep neural networks (DNNs) are successful in many computer vision tasks. However, the most accurate DNNs require millions of parameters and operations, making them energy-, computation-, and memory-intensive. This impedes the deployment of large DNNs in low-power devices with limited compute resources. Recent research improves DNN models by reducing the memory requirement, energy consumption, and number of operations without significantly decreasing accuracy. This paper surveys the progress of low-power deep learning and computer vision, specifically with regard to inference, and discusses the methods for compacting and accelerating DNN models. The techniques can be divided into four major categories: (1) parameter quantization and pruning, (2) compressed convolutional filters and matrix factorization, (3) network architecture search, and (4) knowledge distillation. We analyze the accuracy, advantages, disadvantages, and potential solutions to the problems with the techniques in each category. We also discuss new evaluation metrics as a guideline for future research.
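As a concrete instance of category (1), the sketch below performs unstructured magnitude pruning, zeroing out the smallest-magnitude weights of a layer; the sparsity level and weight shapes are illustrative and not tied to any model discussed in the survey.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights (unstructured pruning).

    weights  -- array of layer weights
    sparsity -- fraction of weights to remove (illustrative default)
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask          # pruned weights plus the binary mask to keep fixed

# Illustrative use on random weights: roughly 90% of the entries become zero.
w = np.random.default_rng(0).normal(size=(128, 128))
w_pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.1%} of weights")
```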

With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
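Following the formula above, the per-class weights of such a class-balanced loss are the inverses of the effective numbers of samples; a minimal sketch is below, where normalizing the weights to sum to the number of classes is a common convention assumed here rather than stated in the abstract.

```python
import numpy as np

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class weights from the effective number of samples (1 - beta^n) / (1 - beta).

    samples_per_class -- per-class sample counts n
    beta              -- hyperparameter in [0, 1); as beta -> 1 this approaches
                         inverse-frequency weighting, and beta = 0 gives uniform weights.
    """
    n = np.asarray(samples_per_class, dtype=float)
    effective_num = (1.0 - np.power(beta, n)) / (1.0 - beta)
    weights = 1.0 / effective_num
    # Normalize so the weights sum to the number of classes (assumed convention).
    return weights * len(n) / weights.sum()

# Long-tailed example: head, medium, and tail classes.
print(class_balanced_weights([5000, 500, 50], beta=0.999))
```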
