
Over-the-air computation (AirComp) is a known technique in which wireless devices transmit values by analog amplitude modulation so that a function of these values is computed over the communication channel at a common receiver. The physical basis is the superposition property of electromagnetic waves, which naturally yields sums of analog values. Consequently, applications of AirComp have been almost entirely restricted to analog communication systems. However, using digital communications for over-the-air computation would bring several benefits, such as error correction, synchronization, acquisition of channel state information, and easier adoption by current digital communication systems. Nevertheless, a common belief is that digital modulations are generally infeasible for computation tasks because overlapping digitally modulated signals yields signals that appear meaningless for these tasks. This paper challenges this belief and proposes a fundamentally new computing method, named ChannelComp, for performing over-the-air computation with any digital modulation. In particular, we propose digital modulation formats that allow us to compute a wider class of functions than AirComp can, and we formulate a feasibility optimization problem that ascertains the optimal digital modulation for computing functions over the air. Simulation results verify the superior performance of ChannelComp compared to AirComp, particularly for product functions, with an improvement of around 10 dB in computation error.
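To make the superposition principle behind AirComp concrete, the following minimal sketch (our illustration, not code from the paper; the noise level and the log/exp pre- and post-processing are assumptions) simulates devices transmitting analog amplitudes over an additive channel, so the receiver directly observes a noisy sum and, via a nomographic trick, a product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each of K devices transmits its value as an analog amplitude; the
# multiple-access channel superimposes the signals and adds noise.
K = 5
values = rng.uniform(0.0, 1.0, size=K)   # sensor readings to aggregate
noise_std = 0.01                          # receiver noise level (illustrative)

received = values.sum() + rng.normal(0.0, noise_std)  # channel superposition
print(f"true sum      = {values.sum():.4f}")
print(f"over-the-air  = {received:.4f}")

# A nomographic function f(x) = psi(sum_k phi_k(x_k)) extends this beyond
# plain sums: e.g., the product via phi = log, psi = exp.
product_air = np.exp(np.log(values).sum() + rng.normal(0.0, noise_std))
print(f"true product  = {values.prod():.4f}")
print(f"over-the-air  = {product_air:.4f}")
```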

Related Content

The explosive development of the Internet of Things (IoT) has led to increased interest in mobile edge computing (MEC), which provides computational resources at network edges to accommodate computation-intensive and latency-sensitive applications. Intelligent reflecting surfaces (IRSs) have gained attention as a solution to blockage problems during the uplink offloading transmission in MEC systems. This paper explores IRS-aided multi-cell networks that enable servers to serve neighboring cells and cooperate to handle resource exhaustion. We aim to minimize the joint energy and latency cost by jointly optimizing computation tasks, edge computing resources, user beamforming, and IRS phase shifts. The problem is decomposed into two subproblems--the MEC subproblem and the IRS communication subproblem--using the block coordinate descent (BCD) technique. The MEC subproblem is reformulated as a nonconvex quadratically constrained problem (QCP), while the IRS communication subproblem is transformed into a weighted-sum-rate problem with auxiliary variables. We propose an efficient algorithm that iteratively optimizes the MEC resources and the IRS communication until convergence. Numerical results show that our algorithm outperforms the benchmarks and that multi-cell MEC systems achieve additional performance gains when supported by IRSs.
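As a rough illustration of the BCD structure described above (a hedged sketch: the cost function and both update rules below are placeholders, not the paper's QCP or weighted-sum-rate solvers), the algorithm alternates between the two blocks of variables until the joint cost stops improving:

```python
import numpy as np

def solve_mec_subproblem(mec_vars, irs_vars):
    """Stand-in for the QCP step: update task split / compute resources."""
    return 0.9 * mec_vars  # placeholder update that decreases the cost

def solve_irs_subproblem(mec_vars, irs_vars):
    """Stand-in for the weighted-sum-rate step: update beams / phase shifts."""
    return 0.9 * irs_vars

def joint_cost(mec_vars, irs_vars):
    """Placeholder for the joint energy-and-latency objective."""
    return np.sum(mec_vars**2) + np.sum(irs_vars**2)

mec_vars = np.ones(4)   # e.g., offloading ratios and CPU allocations
irs_vars = np.ones(8)   # e.g., IRS phase shifts (real stand-ins here)

prev = np.inf
for it in range(100):                    # BCD: alternate the two blocks
    mec_vars = solve_mec_subproblem(mec_vars, irs_vars)
    irs_vars = solve_irs_subproblem(mec_vars, irs_vars)
    cost = joint_cost(mec_vars, irs_vars)
    if prev - cost < 1e-6:               # stop when improvement stalls
        break
    prev = cost
print(f"converged after {it + 1} iterations, cost = {cost:.6f}")
```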

We present Modular Polynomial (MP) codes for Secure Distributed Matrix Multiplication (SDMM). The construction is based on the observation that one can decode certain proper subsets of the coefficients of a polynomial with fewer evaluations than are necessary to interpolate the entire polynomial. These codes are proven to outperform, in terms of recovery threshold, the currently best-known polynomial codes for the inner product partition. We also present Generalized Gap Additive Secure Polynomial (GGASP) codes for the grid partition. These two families of codes are shown experimentally to perform favorably, in terms of recovery threshold, against comparable polynomial codes for SDMM. Both MP and GGASP codes achieve the recovery threshold of Entangled Polynomial Codes for robustness against stragglers, but MP codes can decode below this threshold depending on which set of worker nodes fails. The decoding complexity of MP codes is shown to be lower than that of other approaches in the literature, because the user is not tasked with interpolating an entire polynomial.
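The observation that some coefficients can be decoded from fewer evaluations than full interpolation requires can be illustrated with a toy aliasing argument (our example, not the MP construction itself): evaluating a polynomial at the N-th roots of unity collapses coefficients whose degrees agree modulo N, so when the wanted coefficients fall in distinct residue classes, N points suffice.

```python
import numpy as np

# f(x) = 3 + 7x^5, degree 5: full interpolation would need 6 evaluations,
# but its two nonzero coefficients have degrees 0 and 5, which are distinct
# mod 2, so 2 evaluations at the square roots of unity already separate them.
coeffs = np.array([3.0, 0.0, 0.0, 0.0, 0.0, 7.0])  # ascending degree order
N = 2                                              # fewer than deg + 1 = 6
omega = np.exp(2j * np.pi / N)
evals = np.array([np.polyval(coeffs[::-1], omega**j) for j in range(N)])

# An inverse DFT of the evaluations yields the aliased coefficient sums
# c_r + c_{r+N} + ...; each residue class here holds one nonzero entry.
aliased = np.fft.ifft(evals)
print(np.round(aliased.real, 6))  # -> [3. 7.]: c0 and c5 from only 2 points
```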

We consider the problem of how a platform designer, owner, or operator can improve the design and operation of a digital platform by leveraging a computational cognitive model that represents users' folk theories about the platform as a sociotechnical system. We do so in the context of Reddit, a social media platform whose owners and administrators make extensive use of shadowbanning, a non-transparent content moderation mechanism that filters a user's posts and comments so that they cannot be seen by fellow community members or the public. After demonstrating that the design and operation of Reddit have led to an abundance of spurious suspicions of shadowbanning in cases where the mechanism was not in fact invoked, we develop a computational cognitive model of users' folk theories about the antecedents and consequences of shadowbanning that predicts when users will attribute their on-platform observations to a shadowban. The model is then used to evaluate the capacity of interventions available to a platform designer, owner, or operator to reduce the incidence of these false suspicions. We conclude by considering the implications of this approach for the design and operation of digital platforms at large.
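As a toy illustration of how such a model can predict shadowban attributions (a minimal Bayesian sketch under assumed probabilities; this is not the paper's cognitive model), a user updating beliefs after a run of posts with no replies becomes increasingly convinced of a shadowban even when none occurred:

```python
# Toy Bayesian sketch of when a user attributes a run of ignored posts to a
# shadowban rather than to ordinary low engagement. All numbers are assumed.
prior_ban = 0.05           # user's prior belief they are shadowbanned
p_silent_if_banned = 0.99  # filtered posts almost never get replies
p_silent_if_ok = 0.60      # unpopular-but-visible posts often get none either

def posterior_shadowban(n_silent_posts: int) -> float:
    """P(shadowban | n consecutive posts with zero replies), via Bayes' rule."""
    like_ban = prior_ban * p_silent_if_banned ** n_silent_posts
    like_ok = (1 - prior_ban) * p_silent_if_ok ** n_silent_posts
    return like_ban / (like_ban + like_ok)

for n in range(0, 11, 2):
    print(f"{n:2d} silent posts -> P(shadowban) = {posterior_shadowban(n):.3f}")
```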

Modern society is becoming accustomed to the Internet of Things (IoT) and Cyber-Physical Systems (CPS) for a variety of applications that involve security-critical user data and information transfers. At the lower end of the spectrum, these devices are resource-constrained and lack attack protection. They become soft targets for malicious code modification attacks that steal device data and misuse it in malicious activities. A resilient system requires continuous detection, prevention, and/or recovery and correct code execution (including in degraded mode). By and large, existing security primitives (e.g., secure boot, Remote Attestation (RA), Control Flow Attestation (CFA), and Data Flow Attestation (DFA)) focus on detection and prevention, leaving proof of code execution and recovery unanswered. To this end, the proposed work presents RARES, a lightweight Runtime Attack Resilient Embedded System design using verified Proof-of-Execution. It presents the first custom hardware control register (Ctrl_register) based technique for classifying and detecting runtime memory modification attacks. It further demonstrates a Proof of Concept (PoC) implementation of use-case-specific attack prevention and onboard recovery techniques. The prototype implementation on an Artix-7 Field Programmable Gate Array (FPGA) and comparison with the state of the art demonstrate very low (2.3%) resource overhead and the efficacy of the proposed solution.
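To convey the flavor of control-register-based attack classification (a software stand-in written for illustration only; the actual Ctrl_register logic is implemented in hardware and its fields are not specified in the abstract), a monitor can latch a detection flag whenever a runtime write lands in the code region:

```python
# Toy software model of a hardware write monitor: region boundaries and the
# register layout are assumptions, not the paper's design.
REGIONS = {"code": (0x0000, 0x3FFF), "data": (0x4000, 0x7FFF)}

ctrl_register = {"attack_detected": False, "last_addr": None}

def on_memory_write(addr: int, value: int) -> str:
    """Classify a write and latch a detection flag, mimicking a HW monitor."""
    ctrl_register["last_addr"] = addr
    lo, hi = REGIONS["code"]
    if lo <= addr <= hi:  # runtime write into code memory: code modification
        ctrl_register["attack_detected"] = True
        return "code-modification attack: trigger onboard recovery"
    return "benign data write"

print(on_memory_write(0x4100, 0xAB))  # -> benign data write
print(on_memory_write(0x1000, 0x90))  # -> attack: trigger onboard recovery
print(ctrl_register)
```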

Training and inference with graph neural networks (GNNs) on massive graphs has been actively studied since the inception of GNNs, owing to the widespread use and success of GNNs in applications such as recommendation systems and financial forensics. This paper is concerned with minibatch training and inference with GNNs that employ node-wise sampling in distributed settings, where the necessary partitioning of vertex features across distributed storage causes feature communication to become a major bottleneck that hampers scalability. To significantly reduce the communication volume without compromising prediction accuracy, we propose a policy for caching data associated with frequently accessed vertices in remote partitions. The proposed policy is based on an analysis of vertex-wise inclusion probabilities (VIP) during multi-hop neighborhood sampling, which may expand the neighborhood far beyond the partition boundaries of the graph. VIP analysis not only enables the elimination of the communication bottleneck, but it also offers a means to organize in-memory data by prioritizing GPU storage for the most frequently accessed vertex features. We present SALIENT++, which extends the prior state-of-the-art SALIENT system to work with partitioned feature data and leverages the VIP-driven caching policy. SALIENT++ retains the local training efficiency and scalability of SALIENT by using a deep pipeline and drastically reducing communication volume while consuming only a fraction of the storage required by SALIENT. We provide experimental results with the Open Graph Benchmark data sets and demonstrate that training a 3-layer GraphSAGE model with SALIENT++ on 8 single-GPU machines is 7.1x faster than with SALIENT on 1 single-GPU machine, and 12.7x faster than with DistDGL on 8 single-GPU machines.
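A minimal sketch of a VIP-style caching decision (our toy graph, fanouts, and cache size; not the SALIENT++ implementation) estimates how often each remote vertex is touched by multi-hop node-wise sampling and caches the most frequently accessed ones:

```python
import random
from collections import Counter

# Toy Monte Carlo estimate of how often each vertex is included during
# 2-hop node-wise neighborhood sampling seeded from the local partition.
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 4], 3: [1, 4], 4: [2, 3]}
fanouts = [2, 2]       # neighbors sampled per hop
local_part = {0, 1}    # vertices whose features are stored locally
counts, trials = Counter(), 2000

for _ in range(trials):
    frontier = [random.choice(tuple(local_part))]  # minibatch seed vertex
    for fanout in fanouts:
        nxt = []
        for v in frontier:
            nbrs = adj[v]
            nxt += random.sample(nbrs, min(fanout, len(nbrs)))
        for u in nxt:
            counts[u] += 1
        frontier = nxt

# Cache the most frequently accessed *remote* vertices first.
vip = {v: counts[v] / trials for v in counts if v not in local_part}
cache = sorted(vip, key=vip.get, reverse=True)[:2]  # cache capacity = 2
print("expected accesses per minibatch (remote):",
      {v: round(p, 2) for v, p in vip.items()})
print("cached remote vertices:", cache)
```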

Language is constantly changing and evolving, so language models quickly become outdated. Consequently, we should continuously update our models with new data to expose them to new events and facts. However, that requires additional computation, which means new carbon emissions. Do any measurable benefits justify this cost? This paper looks for empirical evidence to support continuous training. We reproduce existing benchmarks and extend them to include additional time periods, models, and tasks. Our results show that the downstream task performance of temporally adapted English models for social media data does not improve over time. Pretrained models without temporal adaptation are actually significantly more effective and efficient. However, we also note a lack of suitable temporal benchmarks. Our findings invite a critical reflection on when and how to temporally adapt language models, accounting for sustainability.

Subset selection tasks arise in recommendation systems and search engines, where the goal is to select a subset of items that maximizes value for the user. The values of subsets often display diminishing returns, and hence submodular functions have been used to model them. If the inputs defining the submodular function are known, then existing algorithms can be used. In many applications, however, the inputs have been observed to carry social biases that reduce the utility of the output subset. Hence, interventions to improve the utility are desired. Prior works focus on maximizing linear functions -- a special case of submodular functions -- and show that fairness constraint-based interventions can not only ensure proportional representation but also achieve near-optimal utility in the presence of biases. We study the maximization of a family of submodular functions that capture the functions arising in the aforementioned applications. Our first result is that, unlike linear functions, constraint-based interventions cannot guarantee any constant fraction of the optimal utility for this family of submodular functions. Our second result is an algorithm for submodular maximization. Under mild assumptions, the algorithm provably outputs subsets that have near-optimal utility for this family and that proportionally represent items from each group. In empirical evaluations on both synthetic and real-world data, we observe that this algorithm improves the utility of the output subset over baselines for this family of submodular functions.
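To illustrate the flavor of submodular maximization with proportional representation (a hedged sketch using a coverage-style objective and per-group quotas; this is a generic greedy heuristic, not the paper's algorithm), consider:

```python
import math

# Each item belongs to a group and covers a set of topics; coverage size is
# a classic submodular objective with diminishing returns. Data is made up.
items = {
    "a": (0, {1, 2}), "b": (0, {2, 3}), "c": (0, {1}),
    "d": (1, {3, 4}), "e": (1, {4, 5}), "f": (1, {5, 1}),
}
k = 4
group_sizes = {0: 3, 1: 3}
# Proportional quota: each group's share of the k slots.
quota = {g: math.ceil(k * n / len(items)) for g, n in group_sizes.items()}

selected, covered, used = [], set(), {g: 0 for g in quota}
while len(selected) < k:
    best, best_gain = None, -1
    for name, (g, topics) in items.items():
        if name in selected or used[g] >= quota[g]:
            continue                      # respect the per-group quota
        gain = len(topics - covered)      # marginal coverage gain
        if gain > best_gain:
            best, best_gain = name, gain
    selected.append(best)
    g, topics = items[best]
    used[g] += 1
    covered |= topics
print(selected, "covers topics", covered)  # 2 items from each group
```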

While outlining the many ways to build strong cooperation between regulators and CSOs, this report focuses on making concrete recommendations for the design of an efficient and influential expert group within the European Commission. The creation of an expert group finds its roots in Article 64 and Recital 137 of the DSA, which require the Commission to develop Union expertise and capabilities. Once established, the experts of this group will be able to bring evidence-based information directly to the Commission, together with specific expertise on the protection of fundamental rights and the safety of users online. By instituting an expert group, the Commission will not only benefit from valuable expert knowledge but will also demonstrate its willingness to put in place an efficient enforcement system based on collective intelligence. Aside from the establishment of an expert group, other complementary mechanisms will also help the DSA's enforcement to thrive. Civil society organisations should, for instance, consider organising regular crowdsourcing events to dig into and analyse the data published by entities covered by the transparency obligations. As it has done in the past, the Commission can sponsor these events and be a direct beneficiary of their results. Another way for civil society organisations to bring information to the regulator is through legal action, including making complaints to the regulators.

In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation: the number of interactions is limited by resource constraints such as computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this increases the resource cost of communication and synchronisation, which is in turn difficult to scale. In this paper we present four algorithms to solve these problems. In combination, these algorithms enable each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy is, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as network instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum for the system configurations considered. It provides 5x better performance recovery than approaches without knowledge retention when system connectivity is impacted, and it is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
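To sketch the core idea of learning an allocation strategy while adapting exploration to confidence (illustrative only; the peer success rates, epsilon schedule, and update rule below are assumptions, not the paper's four algorithms), a single agent can be modeled as a bandit-style learner:

```python
import random

# An agent learns which peer to delegate a subtask to via incremental value
# estimates, and scales its exploration by how clear-cut its best choice is.
peer_success = {0: 0.9, 1: 0.5, 2: 0.2}  # hidden per-peer success rates
q = {p: 0.0 for p in peer_success}       # estimated value of each peer
n = {p: 0 for p in peer_success}         # allocation counts
alpha = 0.1                              # learning rate

for step in range(2000):
    # Exploration shrinks as the gap between best and runner-up widens,
    # i.e., the agent explores less once it believes its strategy is good.
    gap = max(q.values()) - sorted(q.values())[-2]
    eps = max(0.05, 0.5 - gap)
    if random.random() < eps:
        peer = random.choice(list(q))    # explore another peer
    else:
        peer = max(q, key=q.get)         # exploit current strategy
    reward = 1.0 if random.random() < peer_success[peer] else 0.0
    n[peer] += 1
    q[peer] += alpha * (reward - q[peer])  # incremental value update

print({p: round(v, 2) for p, v in q.items()}, "allocations:", n)
```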

Visual recognition is currently one of the most important and active research areas in computer vision, pattern recognition, and even the general field of artificial intelligence. It has great fundamental importance and strong industrial demand. Deep neural networks (DNNs) have largely boosted performance on many concrete tasks, with the help of large amounts of training data and new, powerful computational resources. Though recognition accuracy is usually the first concern for new progress, efficiency is actually rather important and sometimes critical for both academic research and industrial applications. Moreover, insightful views on the opportunities and challenges of efficiency are also highly needed by the entire community. While general surveys on the efficiency of DNNs have been conducted from various perspectives, as far as we are aware, scarcely any of them has focused systematically on visual recognition, and it is thus unclear which advances are applicable to it and what else should be considered. In this paper, we present a review of recent advances, together with our suggestions on promising new directions towards improving the efficiency of DNN-related visual recognition approaches. We investigate not only from the model perspective but also from the data perspective (which is not covered by existing surveys), and we focus on the three most studied data types (images, videos, and points). This paper attempts to provide a systematic summary via a comprehensive survey that can serve as a valuable reference and inspire both researchers and practitioners working on visual recognition problems.
