
The problem of goal-oriented semantic filtering and timely source coding in multiuser communication systems is considered here. We study a distributed monitoring system in which multiple information sources, each observing a physical process, provide status update packets to multiple monitors with heterogeneous goals. Two semantic filtering schemes are first proposed as a means to admit or drop arriving packets based on their goal-dependent importance, which is a function of the intrinsic and extrinsic attributes of the information and the probability of occurrence of each realization. Admitted packets at each sensor are then encoded and transmitted over block fading wireless channels so that the served monitors can fulfill their goals in a timely manner. A truncated error control scheme is derived, which allows transmitters to drop or retransmit undelivered packets based on their significance. We then formulate the timely source encoding optimization problem and analytically derive the optimal codeword lengths assigned to the admitted packets, which maximize a weighted sum of semantic utility functions over all pairs of communicating sensors and monitors. Our analytical and numerical results provide the optimal design parameters for different arrival rates and highlight the improvement in timely status update delivery achieved by the proposed semantic filtering, source coding, and error control schemes.
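
The optimal codeword lengths in the paper depend on its specific utility functions, which the abstract does not spell out. Purely as a hedged illustration of importance-aware source coding, the sketch below compares plain Shannon lengths with a hypothetical utility-tilted assignment, in which the code is designed for a distribution proportional to probability times importance so that semantically important realizations receive shorter codewords; all probabilities and weights are made up for the example.

```python
import numpy as np

def shannon_lengths(p):
    """Integer Shannon codeword lengths ceil(-log2 p_i)."""
    return np.ceil(-np.log2(p)).astype(int)

def utility_tilted_lengths(p, u):
    """Hypothetical utility-tilted lengths: code against q_i ~ p_i * u_i,
    so goal-relevant realizations get shorter codewords (an illustrative
    tilt, not the paper's derived optimum)."""
    q = p * u
    q = q / q.sum()
    return np.ceil(-np.log2(q)).astype(int)

p = np.array([0.5, 0.25, 0.15, 0.10])   # occurrence probabilities
u = np.array([1.0, 4.0, 2.0, 8.0])      # goal-dependent importance weights

print("Shannon lengths:       ", shannon_lengths(p))
print("Utility-tilted lengths:", utility_tilted_lengths(p, u))
print("Kraft sums:", (2.0 ** -shannon_lengths(p)).sum(),
      (2.0 ** -utility_tilted_lengths(p, u)).sum())
```

Both assignments keep the Kraft sum at or below one, so each corresponds to a valid prefix-free code; the tilted version trades average length for shorter codewords on important realizations.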

Related content

The journal 《計算機信息》 publishes high-quality papers that broaden the scope of operations research and computing, seeking original research papers on theory, methodology, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
July 6, 2023

Content creators compete for user attention. Their reach crucially depends on algorithmic choices made by developers on online platforms. To maximize exposure, many creators adapt strategically, as evidenced by examples like the sprawling search engine optimization industry. This begets competition for the finite user attention pool. We formalize these dynamics in what we call an exposure game, a model of incentives induced by algorithms, including modern factorization and (deep) two-tower architectures. We prove that seemingly innocuous algorithmic choices, e.g., non-negative vs. unconstrained factorization, significantly affect the existence and character of (Nash) equilibria in exposure games. We proffer the use of creator behavior models, like exposure games, for an (ex-ante) pre-deployment audit. Such an audit can identify misalignment between desirable and incentivized content, and thus complement post-hoc measures like content filtering and moderation. To this end, we propose tools for numerically finding equilibria in exposure games, and illustrate the results of an audit on the MovieLens and LastFM datasets. Among other findings, we observe that strategically produced content exhibits a strong dependence between algorithmic exploration and content diversity, and between model expressivity and bias towards gender-based user and creator groups.
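
As a hedged toy version of numerically finding equilibria in an exposure game (the paper's actual models and solvers are not reproduced here), the sketch below fixes random user embeddings, lets each creator choose a unit-norm content embedding, defines exposure as a softmax share of user attention, and runs best-response dynamics approximated by random candidate search; the dimensions, temperature tau, and search budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_users, n_creators = 2, 200, 5
U = rng.normal(size=(n_users, d))               # fixed user embeddings

def exposure(V, tau=5.0):
    """Softmax share of each user's attention, summed over users."""
    S = U @ V.T                                  # (users, creators) scores
    W = np.exp(tau * (S - S.max(axis=1, keepdims=True)))
    W /= W.sum(axis=1, keepdims=True)
    return W.sum(axis=0)                         # total exposure per creator

# Best-response dynamics: each creator greedily re-optimizes its unit-norm
# embedding by random candidate search while the others stay fixed.
V = rng.normal(size=(n_creators, d))
V /= np.linalg.norm(V, axis=1, keepdims=True)
for sweep in range(50):
    moved = False
    for i in range(n_creators):
        best_vec, best_val = V[i].copy(), exposure(V)[i]
        for c in rng.normal(size=(64, d)):
            c /= np.linalg.norm(c)
            V_try = V.copy()
            V_try[i] = c
            val = exposure(V_try)[i]
            if val > best_val + 1e-9:
                best_vec, best_val, moved = c, val, True
        V[i] = best_vec
    if not moved:                                # approximate equilibrium
        break

print("creator exposures at the fixed point:", np.round(exposure(V), 2))
```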

Motivated by control with communication constraints, in this work we develop a time-invariant data compression architecture for linear-quadratic-Gaussian (LQG) control with minimum bitrate prefix-free feedback. For any fixed control performance, the approach we propose nearly achieves known directed information (DI) lower bounds on the time-average expected codeword length. We refine the analysis of a classical achievability approach, which required quantized plant measurements to be encoded via a time-varying lossless source code. We prove that the sequence of random variables describing the quantizations has a limiting distribution and that the quantizations may be encoded with a fixed source code optimized for this distribution without added time-asymptotic redundancy. Our result follows from analyzing the long-term stochastic behavior of the system, and permits us to additionally guarantee that the time-average codeword length (as opposed to expected length) is almost surely within a few bits of the minimum DI. To our knowledge, this time-invariant achievability result is the first in the literature. The originally published version of the supplementary material included a proof that contained an error that turned out to be inconsequential. This updated preprint corrects this error, which originally appeared under Lemma A.7.
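
As a minimal sketch of why the quantizations can have a limiting distribution (the plant model and parameters here are illustrative assumptions, not the paper's setup), the toy below runs a scalar unstable plant under certainty-equivalent control with a uniform quantizer and reports the empirical entropy of the quantizer indices; a fixed prefix-free (e.g., Huffman) code designed for the limiting distribution would stay within one bit of this entropy per sample.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
a, Delta, T = 1.2, 0.5, 200_000        # unstable plant, quantizer step size

x, indices = 0.0, []
for _ in range(T):
    k = int(np.round(x / Delta))       # uniform quantizer index
    indices.append(k)
    u = -a * (Delta * k)               # certainty-equivalent control
    x = a * x + u + rng.normal()       # x+ = a*(x - q(x)) + w,  w ~ N(0,1)

# The index sequence settles into a limiting distribution; a fixed
# prefix-free code designed for it is within 1 bit of this entropy.
p = np.array(list(Counter(indices).values())) / T
print(f"empirical index entropy: {-(p * np.log2(p)).sum():.2f} bits/sample")
```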

This paper investigates computing offloading and semantic compression for intelligent computing tasks in mobile edge computing (MEC) systems. With the growing popularity of intelligent applications across industries, terminals increasingly need to offload intelligent computing tasks with complex demands to MEC servers, which poses a great challenge for bandwidth and computing capacity allocation in MEC systems. Considering the accuracy requirements of intelligent computing tasks, we formulate a joint optimization problem of computing offloading and semantic compression, maximizing a system utility that captures both computing accuracy and task delay. To solve it, we decompose the problem into a computing capacity allocation subproblem and a compression offloading subproblem, which we solve through convex optimization and successive convex approximation; the offloading decisions, computing capacity allocation, and compression ratios are then obtained in closed form. We then design a computing offloading and semantic compression algorithm for intelligent computing tasks in MEC systems. Simulation results show that our algorithm converges quickly and, as the number of users and the total computing capacity vary, achieves better performance and resource utilization efficiency than the benchmarks.
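
One classic closed form that arises in such convex capacity-allocation subproblems (the paper's exact objective and constraints are not reproduced here) is the square-root rule that minimizes a weighted sum of delays c_i / f_i under a total-capacity budget; the hedged sketch below implements it with illustrative numbers and sanity-checks it against a small feasible perturbation.

```python
import numpy as np

def allocate_capacity(c, w, F):
    """Minimize sum_i w_i * c_i / f_i  subject to  sum_i f_i = F, f_i > 0.
    The KKT conditions give the square-root rule f_i ~ sqrt(w_i * c_i)."""
    s = np.sqrt(w * c)
    return F * s / s.sum()

c = np.array([2.0, 8.0, 4.0])   # task workloads (CPU cycles, illustrative)
w = np.array([1.0, 1.0, 3.0])   # per-user delay weights
F = 10.0                        # total server computing capacity

f = allocate_capacity(c, w, F)
print("allocation:", np.round(f, 3), "| uses", f.sum(), "of", F)
print("optimal weighted delay:  ", (w * c / f).sum())

# Any feasible perturbation should not beat the KKT point.
f_perturbed = f + np.array([0.1, -0.1, 0.0])
print("perturbed weighted delay:", (w * c / f_perturbed).sum())
```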

Mobile Edge Computing (MEC) is expected to play a significant role in the development of 6G networks, as new applications such as cooperative driving and eXtended Reality (XR) require both communication and computational resources from the network edge. However, the limited capabilities of edge servers may be strained to perform complex computational tasks within strict latency bounds for multiple clients. In these contexts, both maintaining a low average Age of Information (AoI) and guaranteeing a low Peak AoI (PAoI) even in the worst case may have significant user experience and safety implications. In this work, we investigate a theoretical model of a MEC server, deriving the expected AoI and the PAoI and latency distributions under the First In First Out (FIFO) and Generalized Processor Sharing (GPS) resource allocation policies. We consider both synchronized and unsynchronized systems, and draw insights on the robust design of resource allocation policies from the analytical results.
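
For reference, the classical average AoI of a single-source M/M/1 FIFO queue is Δ = (1/μ)(1 + 1/ρ + ρ²/(1 − ρ)) with ρ = λ/μ (Kaul et al., 2012); the paper's multi-client FIFO/GPS model is richer than this, but the hedged sketch below verifies the baseline formula by simulation.

```python
import numpy as np

def aoi_mm1_fifo(lam, mu):
    """Average AoI of a single-source M/M/1 FIFO queue (Kaul et al., 2012):
    (1/mu) * (1 + 1/rho + rho**2 / (1 - rho)),  rho = lam/mu < 1."""
    rho = lam / mu
    return (1.0 / mu) * (1.0 + 1.0 / rho + rho ** 2 / (1.0 - rho))

# Event-driven simulation as a sanity check.
rng = np.random.default_rng(2)
lam, mu, N = 0.5, 1.0, 200_000
arrivals = np.cumsum(rng.exponential(1.0 / lam, N))
depart = np.empty(N)
t = 0.0
for i in range(N):
    t = max(t, arrivals[i]) + rng.exponential(1.0 / mu)
    depart[i] = t

# Between consecutive deliveries the age grows linearly from the age of
# the previously delivered packet; integrate the resulting trapezoids.
area = 0.0
for i in range(1, N):
    gap = depart[i] - depart[i - 1]
    area += gap * (depart[i - 1] - arrivals[i - 1]) + gap * gap / 2.0

print("theory:   ", aoi_mm1_fifo(lam, mu))
print("simulated:", area / (depart[-1] - depart[0]))
```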

Localizing mobile robotic nodes in indoor and GPS-denied environments is a complex problem, particularly in dynamic, unstructured scenarios where traditional camera- and LIDAR-based sensing and localization modalities may fail. Alternatively, wireless signal-based localization has been extensively studied in the literature, yet it primarily focuses on fingerprinting and feature-matching paradigms that require dedicated, environment-specific offline data collection. We propose an online robot localization algorithm enabled by collaborative wireless sensor nodes to remedy these limitations. The core novelty of our approach lies in obtaining the Collaborative Direction of Arrival (CDOA) of wireless signals by exploiting the geometric features of, and collaboration between, wireless nodes. The CDOA is combined with the Expectation Maximization (EM) and Particle Filter (PF) algorithms to compute the Gaussian probability of the node's location with high efficiency and accuracy. The algorithm relies on RSSI-only data, making it suitable even for resource-constrained devices. We theoretically analyze the approach and extensively validate its consistency, accuracy, and computational efficiency in simulations, on real-world public datasets, and in real robot demonstrations. The results confirm the method's real-time computational capability and demonstrate centimeter-level localization accuracy, outperforming relevant state-of-the-art localization approaches.
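
As a hedged illustration of DOA-plus-particle-filter localization (not the paper's CDOA/EM pipeline, whose details are not reproduced here), the sketch below estimates a static node's position from noisy bearing measurements taken at four anchor nodes, weighting particles by a Gaussian likelihood on the wrapped angle error; the arena size, noise level, and particle counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
true_pos = np.array([2.0, 3.0])                 # unknown node (ground truth)
anchors = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
sigma = 0.1                                     # bearing noise (radians)

# Noisy DOA-style bearings from each collaborating anchor to the node.
d = true_pos - anchors
z = np.arctan2(d[:, 1], d[:, 0]) + rng.normal(0, sigma, len(anchors))

# Particle filter: for a static node, a few re-weight/resample rounds.
M = 2000
P = rng.uniform(0, 5, size=(M, 2))              # particles over the arena
for _ in range(10):
    w = np.ones(M)
    for a, zi in zip(anchors, z):
        th = np.arctan2(P[:, 1] - a[1], P[:, 0] - a[0])
        err = np.angle(np.exp(1j * (th - zi)))  # wrapped angle difference
        w *= np.exp(-0.5 * (err / sigma) ** 2)  # Gaussian DOA likelihood
    w /= w.sum()
    P = P[rng.choice(M, M, p=w)]                # multinomial resampling
    P += rng.normal(0, 0.05, size=(M, 2))       # jitter keeps diversity

print("true:", true_pos, "| estimate:", np.round(P.mean(axis=0), 2))
```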

With the advent of the emerging metaverse and its related applications in Augmented Reality (AR), current bit-oriented networks struggle to support real-time updates of the vast amount of associated information, hindering further development. Thus, a critical revolution in Sixth Generation (6G) networks is envisioned through the joint exploitation of the context of information and its importance to the task, leading to a communication paradigm shift towards the semantic and effectiveness levels. However, current research has not yet proposed any explicit and systematic communication framework for AR applications that incorporates these two levels. To fill this research gap, this paper presents a task-oriented and semantics-aware communication framework for augmented reality (TSAR) to enhance communication efficiency and effectiveness in 6G. Specifically, we first analyse the traditional wireless AR point cloud communication framework and then summarize the proposed semantic information along with the end-to-end wireless communication pipeline. We then detail the design blocks of the TSAR framework, covering both the semantic and effectiveness levels. Finally, extensive experiments demonstrate that, compared to the traditional point cloud communication framework, the proposed TSAR reduces wireless AR application transmission latency by 95.6%, while improving communication effectiveness in the geometry and color aspects by up to 82.4% and 20.4%, respectively.

With recent advancements in technology, the threats of privacy violations of individuals' sensitive data are surging. Location data, in particular, have been shown to carry a substantial amount of sensitive information. A standard method to mitigate the privacy risks for location data consists in adding noise to the true values to achieve geo-indistinguishability (geo-ind). However, geo-ind alone is not sufficient to cover all privacy concerns. In particular, isolated locations are not sufficiently protected by the state-of-the-art Laplace mechanism (LAP) for geo-ind. In this paper, we focus on a mechanism based on the Blahut-Arimoto algorithm (BA) from the rate-distortion theory. We show that BA, in addition to providing geo-ind, enforces an elastic metric that mitigates the problem of isolation. Furthermore, BA provides an optimal trade-off between information leakage and quality of service. We then proceed to study the utility of BA in terms of the statistics that can be derived from the reported data, focusing on the inference of the original distribution. To this purpose, we de-noise the reported data by applying the iterative Bayesian update (IBU), an instance of the expectation-maximization method. It turns out that BA and IBU are dual to each other, and as a result, they work well together, in the sense that the statistical utility of BA is quite good and better than LAP for high privacy levels. Exploiting these properties of BA and IBU, we propose an iterative method, PRIVIC, for a privacy-friendly incremental collection of location data from users by service providers. We illustrate the soundness and functionality of our method both analytically and with experiments.
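
Both algorithms named in this abstract are standard and easy to sketch: below is a minimal rate-distortion Blahut-Arimoto iteration producing a noisy location channel, followed by the iterative Bayesian update recovering the input distribution from the channel's output statistics; the 1-D location grid, squared-distance distortion, and the Lagrange parameter beta are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def blahut_arimoto(p, D, beta, iters=200):
    """Rate-distortion BA: channel c[x, y] trading mutual information
    against expected distortion. p: input prior (n,), D: distortion (n, m)."""
    n, m = D.shape
    c = np.full((n, m), 1.0 / m)
    for _ in range(iters):
        q = p @ c                                   # output marginal q(y)
        c = q[None, :] * np.exp(-beta * D)
        c /= c.sum(axis=1, keepdims=True)
    return c

def ibu(C, f, iters=500):
    """Iterative Bayesian update (an EM instance): estimate the input
    distribution from channel C[x, y] and empirical output frequencies f."""
    q = np.full(C.shape[0], 1.0 / C.shape[0])
    for _ in range(iters):
        post = (q[:, None] * C) / (q @ C)[None, :]  # posterior P(x | y)
        q = post @ f
    return q

# Toy 1-D grid of locations with squared-distance distortion.
xs = np.arange(5)
D = (xs[:, None] - xs[None, :]) ** 2.0
p_true = np.array([0.1, 0.2, 0.4, 0.2, 0.1])

C = blahut_arimoto(p_true, D, beta=1.0)
f = p_true @ C                                      # observed output stats
print("recovered prior:", np.round(ibu(C, f), 3))
print("true prior:     ", p_true)
```

The duality the abstract mentions shows up here concretely: IBU de-noises exactly the kind of output statistics that the BA channel produces, so the recovered prior closely matches the true one.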

Over-the-air computation (AirComp), a data aggregation method that improves network efficiency by exploiting the superposition property of wireless channels, has received much attention recently. Meanwhile, orthogonal time frequency space (OTFS) modulation provides strong Doppler resilience and facilitates reliable transmission for high-mobility communications. Hence, in this work, we investigate an OTFS-based AirComp system in the presence of time-frequency dual-selective channels. In particular, we first develop a novel transmission framework for the considered system, in which the pilot signal is sent together with the data and channel estimation is performed using the echo from the access point to the sensor, thereby reducing the overhead of channel state information (CSI) feedback. Then, based on the CSI estimated from the previous frame, we design a robust precoding matrix that minimizes the mean square error in the current frame, taking into account the estimation error due to receiver noise and outdated CSI. Simulation results demonstrate the effectiveness of the proposed robust precoding scheme compared with non-robust precoding; the performance gain is more pronounced at high signal-to-noise ratios and under large channel estimation errors.
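
As a hedged scalar caricature of robust versus non-robust precoding under imperfect CSI (a real-valued toy, not the paper's OTFS matrix design), the sketch below compares an MMSE scaling that accounts for the estimation-error variance against a naive inversion of the channel estimate; all variances are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
h_hat = 1.0                                 # outdated channel estimate
sigma_e2, sigma_n2, T = 0.2, 0.01, 200_000  # error/noise variances, trials

w_robust = h_hat / (h_hat**2 + sigma_e2)    # accounts for estimation error
w_naive = 1.0 / h_hat                       # pretends h_hat is exact

e = rng.normal(0.0, np.sqrt(sigma_e2), T)   # true channel h = h_hat + e
n = rng.normal(0.0, np.sqrt(sigma_n2), T)   # receiver noise
s = rng.choice([-1.0, 1.0], T)              # unit-power symbols

for name, w in (("robust", w_robust), ("naive ", w_naive)):
    mse = np.mean(((h_hat + e) * w * s + n - s) ** 2)
    print(f"{name} precoder MSE: {mse:.4f}")
```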

Exploiting the computational heterogeneity of mobile devices and edge nodes, mobile edge computing (MEC) provides an efficient approach to realizing real-time applications that are sensitive to information freshness by offloading tasks from mobile devices to edge nodes. We use the Age-of-Information (AoI) metric to evaluate information freshness. An efficient solution that minimizes the AoI of a multi-user MEC system is non-trivial to obtain due to the random computing times. In this paper, we consider multiple users offloading tasks to heterogeneous edge servers in a MEC system. We first reformulate the problem as a Restless Multi-Armed Bandit (RMAB) problem and establish a hierarchical Markov Decision Process (MDP) to characterize the AoI updates in the MEC system. Based on the hierarchical MDP, we propose a nested index framework and design a nested index policy with provable asymptotic optimality. Finally, we obtain the nested index in closed form, which enables a tradeoff between computational complexity and accuracy. Our algorithm reduces the optimality gap by up to 40% compared to benchmarks, and it asymptotically approaches the lower bound as the system scale grows large.
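
The paper's nested index has a specific closed form that is not reproduced here; as a hedged sketch of the general index-policy idea, the toy below schedules the M arms with the largest illustrative index mu_i * age_i in each slot and compares the resulting average AoI against random scheduling, with per-user completion probabilities drawn at random.

```python
import numpy as np

def simulate(policy, N=8, M=2, T=20_000, seed=5):
    """Average AoI of N users served by M servers under a given policy."""
    rng = np.random.default_rng(seed)
    mu = rng.uniform(0.3, 0.9, N)       # per-user completion probability
    age = np.ones(N)
    total = 0.0
    for _ in range(T):
        if policy == "index":
            chosen = np.argsort(mu * age)[-M:]   # largest indices win
        else:
            chosen = rng.choice(N, M, replace=False)
        done = rng.random(N) < mu       # random computing completion
        age += 1.0                      # everyone ages by one slot
        for i in chosen:
            if done[i]:
                age[i] = 1.0            # fresh result delivered
        total += age.mean()
    return total / T

print("index policy avg AoI :", round(simulate("index"), 3))
print("random policy avg AoI:", round(simulate("random"), 3))
```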

In recent years, mobile devices have developed rapidly, gaining stronger computation capabilities and larger storage. Some computation-intensive machine learning and deep learning tasks can now run on mobile devices. To take advantage of the resources available on mobile devices and preserve users' privacy, the idea of mobile distributed machine learning has been proposed. It uses local hardware resources and local data to solve machine learning sub-problems on mobile devices, and uploads only computation results, rather than the original data, to contribute to the optimization of the global model. This architecture not only relieves the computation and storage burden on servers, but also protects users' sensitive information. Another benefit is bandwidth reduction, as various kinds of local data can now participate in the training process without being uploaded to the server. In this paper, we provide a comprehensive survey of recent studies in mobile distributed machine learning. We survey a number of widely used mobile distributed machine learning methods and present an in-depth discussion of the challenges and future directions in this area. We believe this survey gives a clear overview of mobile distributed machine learning and provides guidelines for applying it to real applications.
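
A minimal sketch of the architecture this paragraph describes, in the style of federated averaging (the linear-regression task, sizes, and learning rates are illustrative assumptions): each device runs a few gradient steps on its private data and uploads only its model, which the server averages into the global model.

```python
import numpy as np

rng = np.random.default_rng(6)
d, n_devices, n_local = 5, 10, 100
w_true = rng.normal(size=d)

# Each device holds private data; raw data never leaves the device.
data = []
for _ in range(n_devices):
    X = rng.normal(size=(n_local, d))
    y = X @ w_true + rng.normal(0, 0.1, n_local)
    data.append((X, y))

w_global = np.zeros(d)
for rnd in range(20):                   # communication rounds
    local_models = []
    for X, y in data:
        w = w_global.copy()
        for _ in range(5):              # local gradient steps on-device
            grad = X.T @ (X @ w - y) / len(y)
            w -= 0.1 * grad
        local_models.append(w)          # upload the result, not the data
    w_global = np.mean(local_models, axis=0)

print("distance to true model:", np.linalg.norm(w_global - w_true))
```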
