Energy or time-efficient scheduling is of particular interest in wireless communications, with applications in sensor network design, cellular communications, and more. In many cases, wireless packets to be transmitted have deadlines that upper bound the times before their transmissions, to avoid staleness of transmitted data. In this paper, motivated by emerging applications in security-critical communications, age of information, and molecular communications, we expand the wireless packet scheduling framework to scenarios which involve strict limits on the time after transmission, in addition to the conventional pre-transmission delay constraints. As a result, we introduce the scheduling problem under two-sided individual deadlines, which captures systems wherein transmitting too late (stale) and too early (fresh) are both undesired. Subject to said two-sided deadlines, we provably solve the optimal (energy-minimizing) offline packet scheduling problem. Leveraging this result and the inherent duality between rate and energy, we propose and solve the completion-time-optimal offline packet scheduling problem under the introduced two-sided framework. Overall, the developed theoretical framework can be utilized in applications wherein packets have finite lifetimes both before and after their transmission (e.g., security-critical applications), or applications with joint strict constraints on packet delay and information freshness.
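As a rough illustration of the offline problem just described (not the paper's provably optimal algorithm), the sketch below casts energy-minimizing scheduling under two-sided completion-time windows as a small convex program, assuming the standard convex power-rate relation e(d) = (2^(B/d) - 1)·d for sending B bits over a duration d, back-to-back transmissions in a fixed order, and SciPy's generic solver. All packet sizes and windows are hypothetical.

```python
# Toy sketch: energy-minimizing offline scheduling under two-sided
# completion-time windows, solved as a small convex program.
import numpy as np
from scipy.optimize import minimize

B = np.array([4.0, 2.0, 3.0])    # bits per packet (hypothetical)
lo = np.array([1.0, 2.5, 4.0])   # earliest allowed completion times ("not too fresh")
hi = np.array([3.0, 5.0, 7.0])   # latest allowed completion times ("not too stale")

def energy(d):
    # total transmission energy; convex and decreasing in each duration
    return np.sum((2.0 ** (B / d) - 1.0) * d)

cons = []
for i in range(len(B)):
    # completion time of packet i must lie inside its two-sided window
    cons.append({"type": "ineq", "fun": lambda d, i=i: np.cumsum(d)[i] - lo[i]})
    cons.append({"type": "ineq", "fun": lambda d, i=i: hi[i] - np.cumsum(d)[i]})

res = minimize(energy, x0=np.ones(len(B)), constraints=cons,
               bounds=[(1e-3, None)] * len(B))
print(res.x, energy(res.x))
```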

Related content

CASES: International Conference on Compilers, Architectures, and Synthesis for Embedded Systems. Publisher: ACM. SIT:

Recent advances in the development of large language models are rapidly changing how online applications function. LLM-based search tools, for instance, offer a natural language interface that can accommodate complex queries and provide detailed, direct responses. At the same time, there have been concerns about the veracity of the information provided by LLM-based tools due to potential mistakes or fabrications that can arise in algorithmically generated text. In a set of online experiments, we investigate how LLM-based search changes people's behavior relative to traditional search, and what can be done to mitigate overreliance on LLM-based output. Participants in our experiments were asked to solve a series of decision tasks that involved researching and comparing different products, and were randomly assigned to do so with either an LLM-based search tool or a traditional search engine. In our first experiment, we find that participants using the LLM-based tool were able to complete their tasks more quickly, using fewer but more complex queries than those who used traditional search. Moreover, these participants reported a more satisfying experience with the LLM-based search tool. When the information presented by the LLM was reliable, participants using the tool made decisions with a comparable level of accuracy to those using traditional search; however, we observed overreliance on incorrect information when the LLM erred. Our second experiment further investigated this issue by randomly assigning some users to see a simple color-coded highlighting scheme to alert them to potentially incorrect or misleading information in the LLM responses. Overall we find that this confidence-based highlighting substantially increases the rate at which users spot incorrect information, improving the accuracy of their overall decisions while leaving most other measures unaffected.
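The abstract does not detail the highlighting mechanism beyond "simple color-coded highlighting"; the following is a hypothetical sketch of how span-level confidence scores could drive such an interface, with made-up thresholds and colors.

```python
# Hypothetical sketch of confidence-based highlighting: wrap low-confidence
# spans of an LLM answer in color-coded HTML. Thresholds are illustrative.
def highlight(spans, low=0.5, mid=0.8):
    """spans: list of (text, confidence) pairs from the model."""
    out = []
    for text, conf in spans:
        if conf < low:
            out.append(f'<mark style="background:#f8b4b4">{text}</mark>')  # likely wrong
        elif conf < mid:
            out.append(f'<mark style="background:#fde68a">{text}</mark>')  # uncertain
        else:
            out.append(text)  # confident: leave unhighlighted
    return " ".join(out)

print(highlight([("The battery lasts", 0.9), ("14 days", 0.3), ("on standby", 0.7)]))
```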

Tests based on heteroskedasticity robust standard errors are an important technique in econometric practice. Choosing the right critical value, however, is not simple at all: conventional critical values based on asymptotics often lead to severe size distortions; and so do existing adjustments including the bootstrap. To avoid these issues, we suggest using smallest size-controlling critical values, the generic existence of which we prove in this article for the commonly used test statistics. Furthermore, we give easy-to-check sufficient (and often also necessary) conditions for their existence. Granted their existence, these critical values are the canonical choice: larger critical values result in unnecessary power loss, whereas smaller critical values lead to over-rejections under the null hypothesis, make spurious discoveries more likely, and thus are invalid. We suggest algorithms to numerically determine the proposed critical values and provide implementations in accompanying software. Finally, we numerically study the behavior of the proposed testing procedures, including their power properties.
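To make the idea concrete, here is a crude Monte Carlo sketch (not the authors' algorithm) that approximates the smallest size-controlling critical value for a heteroskedasticity-robust t-test by taking the worst-case null quantile over a small grid of variance patterns; the paper's procedure instead handles the supremum over all variance configurations rigorously. The design, patterns, and sample size are all hypothetical.

```python
# Grid-based approximation of a size-controlling critical value for an
# HC0-robust t-test of H0: slope = 0 in a fixed-design linear model.
import numpy as np

rng = np.random.default_rng(0)
n, alpha, reps = 25, 0.05, 20_000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # fixed design

def hc_tstat(y):
    # OLS slope with HC0 (White) standard error
    XtXinv = np.linalg.inv(X.T @ X)
    beta = XtXinv @ X.T @ y
    u = y - X @ beta
    V = XtXinv @ (X.T * u**2) @ X @ XtXinv   # HC0 covariance estimate
    return beta[1] / np.sqrt(V[1, 1])

# crude grid of heteroskedasticity patterns (the true worst case requires
# optimizing over all variance configurations, as in the paper)
patterns = [np.ones(n), np.exp(X[:, 1]), (X[:, 1] - X[:, 1].min() + 0.1) ** 2]

worst_quantiles = []
for s2 in patterns:
    t = np.array([abs(hc_tstat(rng.standard_normal(n) * np.sqrt(s2)))
                  for _ in range(reps)])
    worst_quantiles.append(np.quantile(t, 1 - alpha))
c = max(worst_quantiles)  # smallest c controlling size over this grid
print(f"size-controlling critical value (grid approximation): {c:.3f}")
```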

In this paper, we present a method to encrypt dynamic controllers that can be implemented through most homomorphic encryption schemes, including somewhat, leveled fully, and fully homomorphic encryption. To this end, we represent the output of the given controller as a linear combination of a fixed number of previous inputs and outputs. As a result, the encrypted controller involves only a limited number of homomorphic multiplications on every encrypted data, assuming that the output is re-encrypted and transmitted back from the actuator. Guidance on parameter choice is also provided, ensuring that the encrypted controller achieves predefined performance for an infinite time horizon. Furthermore, we propose a customization of the method for Ring-Learning With Errors (Ring-LWE) based cryptosystems, where a vector of messages can be encrypted into a single ciphertext and operated simultaneously, thus reducing computation and communication loads. Unlike previous results, the proposed customization does not require extra algorithms such as rotation, other than basic addition and multiplication. Simulation results demonstrate the effectiveness of the proposed method.
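A minimal sketch of the input-output reformulation, assuming a SISO controller and a mock ciphertext type standing in for a real HE library (e.g., SEAL or Lattigo): each step is one inner product of fixed plaintext coefficients with recent encrypted inputs and re-encrypted outputs, so the number of homomorphic multiplications per ciphertext stays bounded. The coefficients below are hypothetical.

```python
# Conceptual sketch: controller in I/O form
#   y_k = sum_i a_i * y_{k-i} + sum_i b_i * u_{k-i}
# evaluated as one homomorphic inner product per step.
from collections import deque

a = [1.2, -0.5]          # output-feedback coefficients (hypothetical)
b = [0.3, 0.1, 0.05]     # input coefficients (hypothetical)

class MockCiphertext:
    """Stand-in for an HE ciphertext: supports add and plaintext multiply."""
    def __init__(self, v): self.v = v
    def __add__(self, o): return MockCiphertext(self.v + o.v)
    def pmul(self, c): return MockCiphertext(c * self.v)  # plaintext-ciphertext mult

def controller_step(enc_u_hist, enc_y_hist):
    # one homomorphic inner product; multiplicative depth stays at 1
    acc = MockCiphertext(0.0)
    for c, x in zip(b, enc_u_hist):
        acc = acc + x.pmul(c)
    for c, x in zip(a, enc_y_hist):
        acc = acc + x.pmul(c)
    return acc

u_hist = deque([MockCiphertext(0.0)] * len(b), maxlen=len(b))
y_hist = deque([MockCiphertext(0.0)] * len(a), maxlen=len(a))
for k, u in enumerate([1.0, 1.0, 1.0, 0.0, 0.0]):
    u_hist.appendleft(MockCiphertext(u))   # "encrypt" the new input
    y = controller_step(u_hist, y_hist)
    y_hist.appendleft(y)                   # in the paper, the actuator re-encrypts y
    print(k, y.v)
```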

Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and it is difficult to stop a model's capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of frontier models are needed: (1) standard-setting processes to identify appropriate requirements for frontier AI developers, (2) registration and reporting requirements to provide regulators with visibility into frontier AI development processes, and (3) mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models. Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them. We consider several options to this end, including granting enforcement powers to supervisory authorities and establishing licensure regimes for frontier AI models. Finally, we propose an initial set of safety standards. These include conducting pre-deployment risk assessments; external scrutiny of model behavior; using risk assessments to inform deployment decisions; and monitoring and responding to new information about model capabilities and uses post-deployment. We hope this discussion contributes to the broader conversation on how to balance public safety risks and innovation benefits from advances at the frontier of AI development.

Social media and user-generated content (UGC) have become increasingly important features of journalistic work in a number of different ways. However, the growth of misinformation means that news organisations have had to devote more and more resources to determining its veracity and to publishing corrections if it is found to be misleading. In this work, we present the results of interviews with eight members of fact-checking teams from two organisations. Team members described their fact-checking processes and the challenges they currently face in completing a fact-check in a robust and timely way. The former reveals, inter alia, significant differences in fact-checking practices and the role played by collaboration between team members. We conclude with a discussion of the implications for the development and application of computational tools, including where computational tool support is currently lacking and the importance of being able to accommodate different fact-checking practices.

Connected and autonomous vehicles (CAVs) can reduce human errors in traffic accidents, increase road efficiency, and execute various tasks ranging from delivery to smart city surveillance. Reaping these benefits requires CAVs to autonomously navigate to target destinations. To this end, each CAV's navigation controller must leverage the information collected by sensors and wireless systems for decision-making on longitudinal and lateral movements. However, enabling autonomous navigation for CAVs requires a convergent integration of communication, control, and learning systems. The goal of this article is to explicitly expose the challenges related to this convergence and propose solutions to address them in two major use cases: Uncoordinated and coordinated CAVs. In particular, challenges related to the navigation of uncoordinated CAVs include stable path tracking, robust control against cyber-physical attacks, and adaptive navigation controller design. Meanwhile, when multiple CAVs coordinate their movements during navigation, fundamental problems such as stable formation, fast collaborative learning, and distributed intrusion detection are analyzed. For both cases, solutions using the convergence of communication theory, control theory, and machine learning are proposed to enable effective and secure CAV navigation. Preliminary simulation results are provided to show the merits of proposed solutions.
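As one concrete instance of the path-tracking task mentioned for uncoordinated CAVs, below is a minimal pure-pursuit lateral controller; the article's actual controller designs are not reproduced here, and the vehicle parameters and reference path are illustrative.

```python
# Minimal pure-pursuit lateral control sketch for a single CAV.
import math

WHEELBASE = 2.8      # meters (hypothetical vehicle)
LOOKAHEAD = 6.0      # meters

def pure_pursuit_steer(x, y, yaw, path):
    """Return a steering angle toward the nearest path point ~LOOKAHEAD away."""
    target = min((p for p in path
                  if math.hypot(p[0] - x, p[1] - y) >= LOOKAHEAD),
                 key=lambda p: math.hypot(p[0] - x, p[1] - y),
                 default=path[-1])
    # bearing of the target relative to the vehicle heading
    alpha = math.atan2(target[1] - y, target[0] - x) - yaw
    # pure-pursuit law: steer so the arc through the target is followed
    return math.atan2(2.0 * WHEELBASE * math.sin(alpha), LOOKAHEAD)

path = [(float(i), 0.1 * i * i) for i in range(50)]  # toy reference path
print(pure_pursuit_steer(0.0, 1.0, 0.0, path))
```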

Recent breakthroughs in synthetic data generation approaches made it possible to produce highly photorealistic images which are hardly distinguishable from real ones. Furthermore, synthetic generation pipelines have the potential to generate an unlimited number of images. The combination of high photorealism and scale turns synthetic data into a promising candidate for improving various machine learning (ML) pipelines. Thus far, a large body of research in this field has focused on using synthetic images for training, by augmenting and enlarging training data. In contrast to using synthetic data for training, in this work we explore whether synthetic data can be beneficial for model selection. Considering the task of image classification, we demonstrate that when data is scarce, synthetic data can be used to replace the held-out validation set, thus allowing training on a larger dataset. We also introduce a novel method to calibrate the synthetic error estimate to match that of the real domain. We show that such calibration significantly improves the usefulness of synthetic data for model selection.
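The paper's exact calibration method is not reproduced here; one plausible minimal sketch fits a linear map from synthetic-validation error to real error on a few anchor models where both are measurable, then ranks candidate models by the calibrated estimate. All error values below are made up.

```python
# Sketch: calibrate synthetic-validation error against real error, then
# use the calibrated estimate for model selection.
import numpy as np

# anchor models: (error on synthetic val set, error on small real val set)
syn_err = np.array([0.30, 0.22, 0.18, 0.35])
real_err = np.array([0.26, 0.20, 0.17, 0.29])

# linear calibration real ~= w * synthetic + b, fit by least squares
w, b = np.polyfit(syn_err, real_err, deg=1)

def calibrated(e_syn):
    return w * e_syn + b

candidates = {"model_A": 0.25, "model_B": 0.21, "model_C": 0.28}  # synthetic errors
best = min(candidates, key=lambda m: calibrated(candidates[m]))
print({m: round(calibrated(e), 3) for m, e in candidates.items()}, "->", best)
```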

This paper focuses on resource scheduling in the downlink of LTE-Advanced with aggregation of multiple Component Carriers (CCs). When Carrier Aggregation (CA) is applied, a well-designed resource scheduling scheme is essential to the LTE-A system. Joint User Scheduling (JUS), Separated Random User Scheduling (SRUS), and Separated Burst-Level Scheduling (SBLS) are three different resource scheduling schemes. JUS is optimal in performance but has high complexity and does not consider quality-of-experience (QoE) parameters. SRUS and SBLS, by contrast, leave users with few resources because they do not fully exploit CA, and system fairness suffers. We propose a novel Carrier Scheduling (CS) scheme, termed "Quality of Service and Channel Scheduling" (QSCS). The set of CCs connected to a user can change at the burst level, driven by the priority of the user's services and the signal quality the user experiences. Simulation results show that the proposed algorithm enhances user throughput to a level comparable with JUS while choosing the best CCs based on QoS and channel-quality parameters. The results also show that the achieved QoE is much better than that of the other algorithms.
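A hypothetical sketch of the idea behind QSCS (the abstract does not give the actual metric or update rule): at each burst, service priority determines how many CCs a user connects to, and channel quality determines which ones. Names and numbers are illustrative.

```python
# Toy burst-level CC assignment driven by service priority and channel quality.
def assign_ccs(users, ccs, cqi, max_ccs=3):
    """users: {name: service priority in [0, 1]}; cqi: {(user, cc): quality}."""
    alloc = {}
    for u, prio in users.items():
        n = max(1, round(prio * max_ccs))   # higher-priority services get more CCs
        ranked = sorted(ccs, key=lambda c: cqi[u, c], reverse=True)
        alloc[u] = ranked[:n]               # ...on that user's best channels
    return alloc

users = {"video_user": 0.9, "background_user": 0.3}
ccs = ["cc1", "cc2", "cc3"]
cqi = {("video_user", "cc1"): 0.8, ("video_user", "cc2"): 0.5,
       ("video_user", "cc3"): 0.3, ("background_user", "cc1"): 0.6,
       ("background_user", "cc2"): 0.9, ("background_user", "cc3"): 0.7}
print(assign_ccs(users, ccs, cqi))
```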

Federated Learning (FL) has emerged as a decentralized technique where, contrary to traditional centralized approaches, devices perform model training collaboratively while preserving data privacy. Despite existing efforts in FL, its environmental impact is still under investigation, and several critical challenges regarding its applicability to wireless networks have been identified. Towards mitigating the carbon footprint of FL, the current work proposes a Genetic Algorithm (GA) approach, targeting the minimization of both the overall energy consumption of an FL process and any unnecessary resource utilization, by orchestrating the computational and communication resources of the involved devices, while guaranteeing a certain FL model performance target. A penalty function is introduced in the offline phase of the GA that penalizes strategies violating the constraints of the environment, ensuring a safe GA process. Evaluation results show the effectiveness of the proposed scheme compared to two state-of-the-art baseline solutions, achieving a decrease of up to 83% in the total energy consumption.
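A minimal GA sketch with the penalty idea described above: each chromosome is a per-device CPU-frequency vector, and fitness is modeled energy plus a quadratic penalty when a (toy) accuracy target is violated. The energy and accuracy models are placeholders, not the paper's.

```python
# Toy GA with a constraint-violation penalty for energy-aware FL orchestration.
import numpy as np

rng = np.random.default_rng(1)
DEVICES, POP, GENS, TARGET_ACC = 10, 40, 60, 0.85

def energy(f):          # toy model: dynamic power grows cubically with frequency
    return np.sum(f ** 3)

def accuracy(f):        # toy model: the slowest device limits round completion
    return 0.7 + 0.3 * np.min(f)

def fitness(f):
    pen = 1e3 * max(0.0, TARGET_ACC - accuracy(f)) ** 2   # penalty for violation
    return energy(f) + pen

pop = rng.uniform(0.2, 1.0, size=(POP, DEVICES))
for _ in range(GENS):
    scores = np.array([fitness(f) for f in pop])
    parents = pop[np.argsort(scores)[:POP // 2]]          # truncation selection
    cut = DEVICES // 2
    kids = np.concatenate([parents[rng.permutation(len(parents))][:, :cut],
                           parents[rng.permutation(len(parents))][:, cut:]], axis=1)
    kids += rng.normal(0, 0.02, kids.shape)               # Gaussian mutation
    pop = np.clip(np.vstack([parents, kids]), 0.2, 1.0)

best = pop[np.argmin([fitness(f) for f in pop])]
print(f"energy={energy(best):.3f}, accuracy={accuracy(best):.3f}")
```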

The hardware computing landscape is changing. What used to be distributed systems can now be found on a chip with highly configurable, diverse, specialized and general-purpose units. Such Systems-on-a-Chip (SoC) are used to control today's cyber-physical systems, serving as the building blocks of critical infrastructures. They are deployed in harsh environments and are connected to the cyberspace, which makes them exposed to both accidental faults and targeted cyberattacks. This is in addition to the changing fault landscape that continued technology scaling, emerging devices and novel application scenarios will bring. In this paper, we discuss how the very features (distributed, parallelized, reconfigurable, heterogeneous) that cause many of the imminent and emerging security and resilience challenges also open avenues for their cure through SoC replication, diversity, rejuvenation, adaptation, and hybridization. We show how to leverage these techniques at different levels across the entire SoC hardware/software stack, calling for more research on the topic.
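As a toy illustration of one technique named above, replication with majority voting (triple modular redundancy) over replicated units, here is a small sketch; real implementations live in hardware or the platform runtime, and the replicas here are just stand-in functions.

```python
# Toy TMR voter: majority-vote over replica outputs and flag disagreeing
# replicas as candidates for rejuvenation (restart/re-image).
from collections import Counter

def tmr_vote(results):
    """Majority-vote over replica outputs; report disagreeing replicas."""
    winner, _ = Counter(results).most_common(1)[0]
    suspicious = [i for i, r in enumerate(results) if r != winner]
    return winner, suspicious

replicas = [compute(42) for compute in (lambda x: x + 1,   # healthy replica
                                        lambda x: x + 1,   # healthy replica
                                        lambda x: x - 7)]  # faulty/compromised replica
print(tmr_vote(replicas))   # -> (43, [2])
```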
