
In recent years, there has been a significant increase in attention towards designing incentive mechanisms for federated learning (FL). Numerous existing studies attempt to design such solutions using various approaches (e.g., game theory, reinforcement learning) under different settings. Yet incentive mechanism design can be significantly biased, because in many applications clients' performance is stochastic and hard to estimate. Properly handling this stochasticity, which prior literature does not adequately address, motivates this research. In this paper, we focus on cross-device FL and propose a multi-level FL architecture for realistic deployment scenarios. Considering two key properties of clients' behavior, uncertainty and correlation, we propose an FL Incentive Mechanism based on Portfolio theory (FL-IMP). To the best of our knowledge, this is the first application of portfolio theory to incentive mechanism design for the FL resource allocation problem. To reflect practical FL scenarios more accurately, we introduce the Federated Learning Agent-Based Model (FL-ABM) to simulate autonomous clients; FL-ABM lets us gain a deeper understanding of the factors that influence the system's outcomes. Extensive experimental evaluations validate the effectiveness of our approach and its superior performance over benchmark methods.
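
The abstract does not state FL-IMP's concrete allocation rule, so the following is only a minimal mean-variance sketch in the Markowitz spirit: treat each client as an asset with an estimated expected contribution and a covariance matrix capturing the correlation between clients' stochastic performance, then allocate selection/reward weight accordingly. The function name `select_clients`, the risk-aversion parameter, and all numbers are illustrative, not the paper's method.

```python
import numpy as np

def select_clients(mu, sigma, risk_aversion=1.0):
    """Mean-variance weighting of clients, Markowitz-style.

    mu:    (n,) expected contribution of each client (e.g., estimated
           accuracy gain per round).
    sigma: (n, n) covariance of client contributions, capturing the
           correlation between clients' stochastic performance.
    Returns non-negative weights summing to 1, interpretable as a
    budget/selection allocation over clients.
    """
    n = len(mu)
    # Unconstrained mean-variance optimum: w proportional to inv(Sigma) mu / (2 lambda)
    w = np.linalg.solve(2.0 * risk_aversion * sigma, mu)
    w = np.clip(w, 0.0, None)  # no "short-selling": weights stay >= 0
    return w / w.sum() if w.sum() > 0 else np.full(n, 1.0 / n)

# Toy example: client 2 has a high mean but is highly correlated with client 1,
# so the allocation shifts some weight toward the uncorrelated client 3.
mu = np.array([0.10, 0.12, 0.08])
sigma = np.array([[0.040, 0.030, 0.001],
                  [0.030, 0.045, 0.001],
                  [0.001, 0.001, 0.020]])
print(select_clients(mu, sigma, risk_aversion=2.0))
```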

Related Content

In recent times, there has been a growing focus on end-to-end autonomous driving technologies, which replace the entire driving pipeline with a single neural network that has a simpler structure and faster inference time. However, while this approach reduces the number of components in the driving pipeline, it also presents challenges related to interpretability and safety. For instance, the trained policy may not always comply with traffic rules, and the lack of intermediate outputs makes it difficult to determine the reason for such misbehavior. Additionally, the successful implementation of autonomous driving heavily depends on reliable and expedient processing of sensory data to accurately perceive the surrounding environment. In this paper, we present a penalty-based imitation learning approach combined with cross-semantics-generation sensor fusion (P-CSG) to efficiently integrate multiple modalities of information and enable the autonomous agent to effectively adhere to traffic regulations. We evaluate our model on the Town 05 Long benchmark, where it improves the driving score by more than 12% compared to the state-of-the-art (SOTA) model, InterFuser. Notably, our model achieves this while running inference roughly 7 times faster and reducing the model size by approximately 30%. More details, including code, can be found at //hk-zh.github.io/p-csg/
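
The penalty terms are not spelled out in the abstract; as a hedged illustration of penalty-based imitation learning, the sketch below augments an L1 imitation loss with differentiable penalties for running red lights and speeding. The signals `red_light_prob`, `crossed_stop_line`, `speed`, and `speed_limit` are hypothetical stand-ins for whatever intermediate outputs the actual model exposes, and the penalty weights are arbitrary.

```python
import torch
import torch.nn.functional as F

def penalty_imitation_loss(pred_actions, expert_actions, red_light_prob,
                           crossed_stop_line, speed, speed_limit,
                           lambda_light=1.0, lambda_speed=0.5):
    """Imitation loss augmented with traffic-rule penalties (schematic).

    pred_actions / expert_actions: (B, A) continuous controls.
    red_light_prob:    (B,) model's belief that the light ahead is red.
    crossed_stop_line: (B,) 1.0 if the predicted trajectory crosses the
                       stop line, else 0.0.
    speed, speed_limit: (B,) current speed and the legal limit.
    """
    imitation = F.l1_loss(pred_actions, expert_actions)
    # Penalise proceeding through a (probably) red light.
    red_light_penalty = (red_light_prob * crossed_stop_line).mean()
    # Penalise exceeding the speed limit.
    speeding_penalty = F.relu(speed - speed_limit).mean()
    return imitation + lambda_light * red_light_penalty + lambda_speed * speeding_penalty
```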

Current detection solutions for distributed denial-of-service (DDoS) attacks require additional infrastructure to handle high aggregate data rates, making them unsuitable for sensor networks or the Internet of Things. Moreover, the security architecture of software-defined sensor networks must account for the vulnerabilities of both software-defined networks and sensor networks. In this paper, we propose a network-aware automated machine learning (AutoML) framework that detects DDoS attacks in software-defined sensor networks. Our framework selects an ideal machine learning algorithm to detect DDoS attacks in network-constrained environments, using metrics such as variable traffic load, heterogeneous traffic rate, and detection time, while preventing over-fitting. Our contributions are two-fold: (i) we investigate the trade-off between the efficiency of ML algorithms and network/traffic state in the scope of DDoS detection; (ii) we design and implement a software architecture containing open-source network tools, with the deployment of multiple ML algorithms. Lastly, we show that under denial-of-service attacks, our framework ensures that traffic packets are still delivered within the network, albeit with additional delay.
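
To make the selection step concrete, here is a schematic sketch, not the paper's implementation, of choosing among candidate detectors by cross-validated F1 score subject to a latency budget; the candidate set, the 5-fold split, and the 0.5-second budget are all assumptions.

```python
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def pick_detector(X, y, max_detect_seconds=0.5):
    """Pick the DDoS detector with the best cross-validated F1 among
    candidates whose inference latency fits the network constraint."""
    candidates = {
        "logreg": LogisticRegression(max_iter=1000),
        "tree": DecisionTreeClassifier(max_depth=8),
        "forest": RandomForestClassifier(n_estimators=50),
    }
    best_name, best_score = None, -np.inf
    for name, model in candidates.items():
        # Cross-validation guards against over-fitting to one traffic trace.
        score = cross_val_score(model, X, y, cv=5, scoring="f1").mean()
        model.fit(X, y)
        start = time.perf_counter()
        model.predict(X[:100])  # crude latency probe on a small batch
        latency = time.perf_counter() - start
        if latency <= max_detect_seconds and score > best_score:
            best_name, best_score = name, score
    return best_name, best_score
```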

Existing solutions for distributed denial-of-service (DDoS) attacks cannot handle highly aggregated data rates; thus, they are unsuitable for Internet service provider (ISP) core networks. This article proposes a digital twin-enabled intelligent DDoS detection mechanism that uses an online learning method for autonomous systems. Our contributions are three-fold: first, we design a DDoS detection architecture based on the digital twin for ISP core networks; second, we implement a Yet Another Next Generation (YANG) model and an automated feature selection (AutoFS) module to handle core network data; third, we use an online learning approach to update the model instantly and efficiently, improve the learning model quickly, and ensure accurate predictions. Finally, we show that our proposed solution successfully detects DDoS attacks and updates the feature selection method and learning model with a true classification rate of 97%, estimating the attack within approximately 15 minutes after it starts.
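
As a rough sketch of the online-learning idea (not the article's code), an incremental classifier can be updated batch by batch as the digital twin streams labelled flow records, avoiding retraining from scratch; the feature layout and label encoding are assumed.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Incremental detector: the digital twin streams labelled flow records,
# and the model updates per mini-batch instead of retraining from scratch.
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])  # 0 = benign traffic, 1 = DDoS

def on_new_batch(X_batch, y_batch):
    """Fold one mini-batch of labelled flows into the running model."""
    clf.partial_fit(X_batch, y_batch, classes=classes)

def score_flows(X_flows):
    """Classify incoming flows with the current model state."""
    return clf.predict(X_flows)
```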

Accurate estimation of conditional average treatment effects (CATE) is at the core of personalized decision making. While there is a plethora of models for CATE estimation, model selection is a nontrivial task due to the fundamental problem of causal inference. Recent empirical work provides evidence in favor of proxy loss metrics with doubly robust properties and in favor of model ensembling. However, theoretical understanding is lacking. Direct application of prior theoretical work leads to suboptimal oracle model selection rates due to the non-convexity of the model selection problem. We provide regret rates for the major existing CATE ensembling approaches and propose a new CATE model ensembling approach based on Q-aggregation using the doubly robust loss. Our main result shows that causal Q-aggregation achieves statistically optimal oracle model selection regret rates of $\frac{\log(M)}{n}$ (with $M$ models and $n$ samples), up to higher-order estimation error terms that involve products of errors in the nuisance functions. Crucially, our regret rate does not require that any of the candidate CATE models be close to the truth. We validate our new method on many semi-synthetic datasets and provide extensions of our work to CATE model selection with instrumental variables and unobserved confounding.
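
To fix ideas, one common way to write the ingredients (notation schematic, not necessarily the paper's exact objective): with estimated outcome models $\hat\mu_a$ and propensity $\hat\pi$, the doubly robust pseudo-outcome and empirical loss are

$$\hat{Y}^{\mathrm{DR}} = \hat\mu_1(X) - \hat\mu_0(X) + \frac{A - \hat\pi(X)}{\hat\pi(X)\,(1-\hat\pi(X))}\,\bigl(Y - \hat\mu_A(X)\bigr), \qquad L_n(\tau) = \frac{1}{n}\sum_{i=1}^{n}\bigl(\hat{Y}^{\mathrm{DR}}_i - \tau(X_i)\bigr)^2,$$

and Q-aggregation selects simplex weights over the $M$ candidate models $\hat\tau_1,\dots,\hat\tau_M$ via

$$\hat\theta \in \arg\min_{\theta \in \Delta_M}\; (1-\nu)\,L_n\Bigl(\sum_{j=1}^{M}\theta_j \hat\tau_j\Bigr) + \nu\sum_{j=1}^{M}\theta_j\,L_n(\hat\tau_j) + \frac{\beta}{n}\sum_{j=1}^{M}\theta_j\log\frac{1}{\pi_j},$$

for some $\nu \in (0,1)$, prior weights $\pi_j$, and penalty level $\beta$, with the final ensemble $\hat\tau = \sum_j \hat\theta_j \hat\tau_j$.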

In recent years, digital humans have been widely applied in augmented/virtual reality (A/VR), where viewers can freely observe and interact with volumetric content. However, digital humans may suffer various distortions during generation and transmission, and little effort has been put into their perceptual quality assessment. There is therefore a pressing need for objective methods that tackle the challenge of digital human quality assessment (DHQA). In this paper, we develop a novel no-reference (NR) method based on the Transformer to handle DHQA in a multi-task manner. Specifically, the frontal 2D projections of the digital humans are rendered as inputs, and a vision transformer (ViT) is employed for feature extraction. We then design a multi-task module to jointly classify the distortion types and predict the perceptual quality levels of digital humans. Experimental results show that the proposed method correlates well with subjective ratings and outperforms state-of-the-art quality assessment methods.
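
A minimal PyTorch sketch of the described design, assuming a standard torchvision ViT backbone and illustrative head sizes (the paper's exact configuration is not given in the abstract); training would pair a cross-entropy loss on the distortion logits with a regression loss on the quality score.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16  # any ViT yielding a (B, 768) feature works

class MultiTaskDHQA(nn.Module):
    """Shared ViT backbone with two heads: distortion-type classification
    and perceptual-quality regression. Sizes are illustrative."""
    def __init__(self, num_distortions=5):
        super().__init__()
        self.backbone = vit_b_16(weights=None)
        self.backbone.heads = nn.Identity()  # expose the 768-d CLS feature
        self.distortion_head = nn.Linear(768, num_distortions)
        self.quality_head = nn.Linear(768, 1)

    def forward(self, x):  # x: (B, 3, 224, 224) rendered 2D projections
        feats = self.backbone(x)
        return self.distortion_head(feats), self.quality_head(feats).squeeze(-1)

logits, quality = MultiTaskDHQA()(torch.randn(2, 3, 224, 224))
```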

Foundational identity systems (FIDS) have been used to optimise service delivery and inclusive economic growth in developing countries. As developing nations increasingly seek to use FIDS for the identification and authentication of identity (ID) holders, trustworthy interoperability will help develop a cross-border dimension of e-Government. Despite this potential, there has not been any significant research on the interoperability of FIDS in the African identity ecosystem. Several challenges stand in the way: on one hand, complex internal political dynamics have resulted in weak institutions, implying that FIDS could be exploited for political gain; on the other hand, citizens' and ID holders' trust in government is typically low, making data security and privacy protection concerns paramount. Likewise, some FIDS are locked into particular technologies, so the path to interoperability remains ambiguous. There are also issues of cross-system compatibility, legislation, vendor-locked system design principles, and unclear regulatory provisions for data sharing. Fundamentally, interoperability is an essential prerequisite for e-Government services and underpins optimal service delivery in education, social security, and financial services, as well as gender equality, as already demonstrated by the European Union. Furthermore, cohesive data exchange through an interoperable identity system will create an ecosystem of efficient data governance and the integration of cross-border FIDS. Consequently, this research identifies the challenges, opportunities, and requirements for cross-border interoperability in an African context. Our findings show that interoperability in the African identity ecosystem is vital to strengthening seamless authentication and verification of ID holders for inclusive economic growth, and to widening the dimensions of e-Government across the continent.

A common concern when a policymaker draws causal inferences from, and makes decisions based on, observational data is that the measured covariates are insufficiently rich to account for all sources of confounding, i.e., the standard unconfoundedness assumption fails to hold. The recently proposed proximal causal inference framework shows that proxy variables, which abound in real-life scenarios, can be leveraged to identify causal effects and thereby facilitate decision-making. Building upon this line of work, we propose a novel optimal individualized treatment regime based on so-called outcome and treatment confounding bridges. We then show that the value function of this new optimal treatment regime is superior to that of existing ones in the literature. Theoretical guarantees, including identification, superiority, excess value bound, and consistency of the estimated regime, are established. Furthermore, we demonstrate the proposed optimal regime via numerical experiments and a real data application.
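
For readers new to the framework, the outcome confounding bridge $h$ is defined, schematically, with $Z$ a treatment-inducing proxy and $W$ an outcome-inducing proxy, as any solution of

$$\mathbb{E}[\,Y \mid Z, A, X\,] = \mathbb{E}[\,h(W, A, X) \mid Z, A, X\,],$$

after which a proximal treatment regime can be written as

$$d(x) \in \arg\max_{a \in \{0,1\}} \mathbb{E}[\,h(W, a, X) \mid X = x\,].$$

This sketch omits the treatment confounding bridge, which the paper's proposed regime additionally exploits.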

In large-scale industrial e-commerce, the efficiency of an online recommendation system is crucial for delivering highly relevant item/content advertising across diverse business scenarios. However, most existing studies focus solely on item advertising and neglect content advertising, an oversight that results in inconsistencies within the multi-entity structure and unfair retrieval. Furthermore, retrieving top-k advertisements from multi-entity advertisements across different domains adds to the complexity. Recent research shows that user-entity behaviors within different domains exhibit both differentiation and homogeneity, so multi-domain matching models typically rely on a hybrid-experts framework with domain-invariant and domain-specific representations. Unfortunately, most approaches focus on optimizing how different experts are combined, failing to address the inherent difficulty of optimizing the expert modules themselves: redundant information across domains introduces interference and competition among experts, while the distinct learning objectives of each domain pose varying optimization challenges. To tackle these issues, we propose robust representation learning for unified online top-k recommendation. Our approach constructs unified modeling in entity space to ensure data fairness, and employs domain adversarial learning and multi-view Wasserstein distribution learning to learn robust representations. Moreover, the proposed method balances conflicting objectives through homoscedastic uncertainty weights and orthogonality constraints. Extensive experiments validate the effectiveness and rationality of our proposed method, which has been successfully deployed online to serve real business scenarios.
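
For the loss-balancing step, one standard formulation of homoscedastic uncertainty weighting (Kendall et al., 2018) is sketched below; whether the paper uses exactly this parameterization is an assumption.

```python
import torch
import torch.nn as nn

class UncertaintyWeighting(nn.Module):
    """Combine per-domain losses as sum_i exp(-s_i) * L_i + s_i, where
    s_i = log(sigma_i^2) is a learned homoscedastic-uncertainty term."""
    def __init__(self, num_tasks):
        super().__init__()
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses):  # iterable of scalar task/domain losses
        total = torch.zeros(())
        for i, loss in enumerate(losses):
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total

# Usage: total = UncertaintyWeighting(3)([loss_domain_a, loss_domain_b, loss_domain_c])
```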

Advances in artificial intelligence often stem from the development of new environments that abstract real-world situations into a form where research can be done conveniently. This paper contributes such an environment based on ideas inspired by elementary microeconomics. Agents learn to produce resources in a spatially complex world, trade them with one another, and consume those that they prefer. We show that the emergent production, consumption, and pricing behaviors respond to environmental conditions in the directions predicted by supply and demand shifts in microeconomics. We also demonstrate settings where the agents' emergent prices for goods vary over space, reflecting the local abundance of goods. After the price disparities emerge, some agents then discover a niche of transporting goods between regions with different prevailing prices -- a profitable strategy because they can buy goods where they are cheap and sell them where they are expensive, as the toy example below illustrates. Finally, in a series of ablation experiments, we investigate how choices in the environmental rewards, bartering actions, agent architecture, and ability to consume tradable goods can either aid or inhibit the emergence of this economic behavior. This work is part of the environment development branch of a research program that aims to build human-like artificial general intelligence through multi-agent interactions in simulated societies. By exploring which environment features are needed for the basic phenomena of elementary microeconomics to emerge automatically from learning, we arrive at an environment that differs from those studied in prior multi-agent reinforcement learning work along several dimensions. For example, the model incorporates heterogeneous tastes and physical abilities, and agents negotiate with one another as a grounded form of communication.
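
The transport niche reduces to a simple profitability check; a toy worked example with invented numbers:

```python
def arbitrage_profit(buy_price, sell_price, transport_cost):
    """An agent ships a good from a cheap region to an expensive one only
    if the price gap beats the cost of carrying it there."""
    return sell_price - buy_price - transport_cost

# Bought at 2.0 in region A, sold at 5.0 in region B, carried for 1.5:
# the trip nets 1.5, so the strategy is worth learning.
assert arbitrage_profit(2.0, 5.0, 1.5) == 1.5
```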

Collaborative filtering often suffers from sparsity and cold-start problems in real recommendation scenarios; therefore, researchers and engineers usually use side information to address these issues and improve the performance of recommender systems. In this paper, we consider knowledge graphs as the source of side information. We propose MKR, a Multi-task feature learning approach for Knowledge graph enhanced Recommendation. MKR is a deep end-to-end framework that utilizes the knowledge graph embedding task to assist the recommendation task. The two tasks are associated by cross&compress units, which automatically share latent features and learn high-order interactions between items in recommender systems and entities in the knowledge graph. We prove that cross&compress units have sufficient capability for polynomial approximation, and show that MKR is a generalized framework over several representative methods of recommender systems and multi-task learning. Through extensive experiments on real-world datasets, we demonstrate that MKR achieves substantial gains in movie, book, music, and news recommendation over state-of-the-art baselines. MKR also maintains decent performance even when user-item interactions are sparse.
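
A hedged PyTorch sketch of a cross&compress unit following the published formulation (cross matrix $C_l = v_l e_l^\top$, then compression back to $d$ dimensions by learned weight vectors); treat it as an unofficial reconstruction, with dimensions illustrative.

```python
import torch
import torch.nn as nn

class CrossCompressUnit(nn.Module):
    """One MKR-style layer: cross the item vector v and entity vector e via
    an outer product, then compress both views back to d dimensions."""
    def __init__(self, dim):
        super().__init__()
        self.w_vv = nn.Linear(dim, 1, bias=False)  # each acts as a d-vector weight
        self.w_ev = nn.Linear(dim, 1, bias=False)
        self.w_ve = nn.Linear(dim, 1, bias=False)
        self.w_ee = nn.Linear(dim, 1, bias=False)
        self.bias_v = nn.Parameter(torch.zeros(dim))
        self.bias_e = nn.Parameter(torch.zeros(dim))

    def forward(self, v, e):                 # v, e: (B, d)
        c = v.unsqueeze(2) * e.unsqueeze(1)  # (B, d, d), cross matrix C = v e^T
        c_t = c.transpose(1, 2)              # C^T = e v^T
        v_next = self.w_vv(c).squeeze(2) + self.w_ev(c_t).squeeze(2) + self.bias_v
        e_next = self.w_ve(c).squeeze(2) + self.w_ee(c_t).squeeze(2) + self.bias_e
        return v_next, e_next

v1, e1 = CrossCompressUnit(16)(torch.randn(4, 16), torch.randn(4, 16))
```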
