
This paper proposes a cell-free massive multiple-input multiple-output (CF-mMIMO) architecture that combines list-based detection, soft interference cancellation (soft-IC), and access point (AP) selection. In particular, we derive a new closed-form expression for the minimum mean-square error (MMSE) receive filter that takes the uplink transmit powers and AP selection into account. This is achieved by optimizing the receive combining vector to minimize the mean square error between the symbol estimate and the transmitted symbol after the multi-user interference (MUI) has been canceled. Using low-density parity-check (LDPC) codes, we devise an iterative detection and decoding (IDD) scheme based on message passing. To perform joint detection at the central processing unit (CPU), the APs locally estimate the channel and forward their received sample data to the CPU via the fronthaul links. To enhance the system's bit error rate performance, the detected symbols are iteratively exchanged between the joint detector and the LDPC decoder in log-likelihood ratio (LLR) form. Furthermore, we draw insights into the derived detector as the number of IDD iterations increases. Finally, the proposed list detector is compared with existing detection techniques.
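As a rough illustration of the detection step, the Python sketch below applies soft interference cancellation followed by an MMSE receive filter. The shapes, the diagonal residual-variance model, and all variable names are our assumptions for illustration, not the paper's exact derivation.

```python
import numpy as np

def mmse_soft_ic(H, p, y, s_soft, var_soft, sigma2):
    """Sketch: soft-IC followed by MMSE combining for each user.

    H        : (N, K) aggregated channel matrix at the CPU
    p        : (K,) uplink transmit powers
    y        : (N,) received samples forwarded over the fronthaul
    s_soft   : (K,) soft symbol estimates fed back by the LDPC decoder
    var_soft : (K,) residual variances of those soft estimates
    sigma2   : noise power
    """
    N, K = H.shape
    G = H * np.sqrt(p)                          # effective channel
    s_hat = np.zeros(K, dtype=complex)
    for k in range(K):
        # Cancel the soft estimates of all interfering users (MUI).
        y_clean = y - (G @ s_soft - G[:, k] * s_soft[k])
        # Cancelled interferers re-enter only via their residual variance.
        D = np.diag(var_soft)
        D[k, k] = 1.0                           # desired symbol at full power
        C = G @ D @ G.conj().T + sigma2 * np.eye(N)
        w = np.linalg.solve(C, G[:, k])         # MMSE receive filter
        s_hat[k] = w.conj() @ y_clean
    return s_hat
```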

Related content

The fundamental diagram has served as the foundation of traffic flow modeling for almost a century. With the increasing availability of road sensor data, deterministic parametric models have proved inadequate for describing the variability of real-world data, especially in the congested regime of the density-flow diagram. In this paper, we estimate the stochastic density-flow relation by introducing a nonparametric method called convex quantile regression (CQR). The proposed method does not depend on any prior functional-form assumptions, yet thanks to the concavity constraints, the estimated function satisfies the theoretical properties of the fundamental diagram. Our second contribution is to develop a new convex quantile regression with bags (CQRb) approach to facilitate the practical application of CQR to real-world data. We illustrate the CQRb estimation process using road sensor data collected in Finland from 2016 to 2018. Our third contribution is to demonstrate the excellent out-of-sample predictive power of the proposed CQRb method in comparison with the standard parametric deterministic approach.
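The core CQR estimator can be posed as a linear program. The sketch below, using cvxpy, minimizes the pinball loss at quantile tau subject to Afriat-style concavity constraints; it is a minimal single-bag illustration under our own assumptions and omits the bagging (CQRb) step.

```python
import cvxpy as cp
import numpy as np

def cqr_fit(x, y, tau):
    """Minimal single-quantile CQR sketch (assumed formulation).

    Fits a piecewise-linear concave function to (density, flow) pairs by
    minimizing the pinball loss at quantile `tau` subject to concavity
    constraints; no parametric functional form is assumed.
    """
    n = len(x)
    alpha = cp.Variable(n)  # intercepts of the supporting hyperplanes
    beta = cp.Variable(n)   # slopes of the supporting hyperplanes
    e = y - (alpha + cp.multiply(beta, x))
    loss = cp.sum(tau * cp.pos(e) + (1 - tau) * cp.pos(-e))
    # Concavity: each observation's own hyperplane lies below every
    # other hyperplane evaluated at that observation.
    cons = [alpha[i] + beta[i] * x[i] <= alpha[j] + beta[j] * x[i]
            for i in range(n) for j in range(n) if i != j]
    cp.Problem(cp.Minimize(loss), cons).solve()
    return alpha.value, beta.value

# Prediction at a new density x0 uses the concave envelope:
#   q_tau(x0) = min(alpha + beta * x0)
```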

The continuous advancement of photorealism in rendering is accompanied by a growth in texture data and, consequently, increasing storage and memory demands. To address this issue, we propose a novel neural compression technique specifically designed for material textures. We unlock two additional levels of detail, i.e., 16x more texels, using low-bitrate compression, with image quality that surpasses advanced image compression techniques such as AVIF and JPEG XL. At the same time, our method allows on-demand, real-time decompression with random access, similar to block texture compression on GPUs, enabling compression both on disk and in memory. The key idea behind our approach is to compress multiple material textures and their mipmap chains together, using a small neural network, optimized for each material, to decompress them. Finally, we use a custom training implementation to achieve practical compression speeds, whose performance surpasses that of general frameworks such as PyTorch by an order of magnitude.
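To make the decoder idea concrete, here is a minimal PyTorch sketch of a small per-material network that maps a latent feature vector and a mip level to the stacked material channels. The latent size, channel count, and layer widths are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TextureDecoder(nn.Module):
    """Sketch: tiny per-material decompression MLP (dimensions assumed).

    One network is overfit per material, mirroring the key idea of
    compressing all of a material's textures and their mipmap chain
    jointly and decoding them with a single small model.
    """
    def __init__(self, latent_dim=16, out_channels=9, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            # e.g. albedo (3) + normal (3) + roughness/metalness/AO (3)
            nn.Linear(hidden, out_channels),
        )

    def forward(self, latent, mip_level):
        # latent: (..., latent_dim); mip_level: (..., 1)
        return self.net(torch.cat([latent, mip_level], dim=-1))
```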

Elitism, which constructs the new population by preserving the best solutions from the old population and the newly generated solutions, has been the default way to perform population updates since its introduction into multi-objective evolutionary algorithms (MOEAs) in the late 1990s. In this paper, we take the opposite perspective and conduct the population update in MOEAs by simply discarding elitism. That is, we treat the newly generated solutions directly as the new population (so that all selection pressure comes from mating selection). We propose a simple non-elitist MOEA (called NE-MOEA) that uses only Pareto dominance sorting to compare solutions, without involving any diversity-related selection criterion. Preliminary experimental results show that NE-MOEA can compete with well-known elitist MOEAs (NSGA-II, SMS-EMOA, and NSGA-III) on several combinatorial problems. Lastly, we discuss the limitations of the proposed non-elitist algorithm and suggest possible future research directions.
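A minimal sketch of one NE-MOEA generation follows: offspring simply replace the old population, and the only selection pressure is a dominance-based mating tournament. The `vary` and `evaluate` callables are placeholders we introduce for illustration, not the paper's operators.

```python
import random

def dominates(a, b):
    """Pareto dominance for minimization: a dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def binary_tournament(pop, fitness):
    """Mating selection by Pareto dominance only (no diversity criterion)."""
    i, j = random.sample(range(len(pop)), 2)
    if dominates(fitness[i], fitness[j]):
        return pop[i]
    if dominates(fitness[j], fitness[i]):
        return pop[j]
    return random.choice([pop[i], pop[j]])

def ne_moea_generation(pop, fitness, vary, evaluate):
    """One non-elitist generation: offspring become the new population."""
    offspring = [vary(binary_tournament(pop, fitness),
                      binary_tournament(pop, fitness))
                 for _ in range(len(pop))]
    return offspring, [evaluate(s) for s in offspring]
```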

Background: Despite a growing body of literature on the impact of software bots on open source software development teams, their effects on team communication, coordination, and collaboration practices are not well understood. Bots can have negative consequences, such as producing information overload or reducing interactions between developers. Objective: The objective of this study is to investigate the effect of specific GitHub Actions on the collaboration networks of open source software teams, using a network-analytic approach. The study will focus on Code Review bots, one of the most frequently implemented types of bots. Method: Fine-grained, time-stamped data on co-editing networks and developer-file contribution networks, as well as workflow runs and git commit logs, will be obtained from a large sample of GitHub repositories. This will allow us to study how bots affect developers' collaboration networks over time. By using a more representative sample of GitHub repositories than previous studies, one that includes projects whose sizes span the whole range of open source communities, this study will provide generalizable results and updated findings on the general usage and distribution of GitHub Actions. With this study, we aim to advance our knowledge of human-bot interaction and the effects of support tools on software engineering teams.
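As one possible way to operationalize the co-editing networks described in the method, the sketch below builds a bipartite developer-file graph from (author, file) commit records and projects it onto developers using networkx; the input format and node naming are our assumptions.

```python
import networkx as nx
from networkx.algorithms import bipartite

def co_editing_network(commits):
    """Sketch: developer co-editing network from (author, file) records.

    Developers become connected when they edit the same file; projected
    edge weights count the number of shared files.
    """
    B = nx.Graph()
    dev_nodes = set()
    for author, path in commits:
        dev = ("dev", author)
        dev_nodes.add(dev)
        B.add_edge(dev, ("file", path))
    # Project the bipartite developer-file graph onto the developer side.
    return bipartite.weighted_projected_graph(B, dev_nodes)
```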

The configuration model is a standard tool for uniformly generating random graphs with a specified degree sequence, and is often used as a null model to evaluate how much of an observed network's structure can be explained by its degree structure alone. A Markov chain Monte Carlo (MCMC) algorithm, based on a degree-preserving double-edge swap, provides an asymptotic solution for sampling from the configuration model. However, accurately and efficiently detecting this Markov chain's convergence to its stationary distribution remains an unsolved problem. Here, we provide a solution to detect convergence and sample from the configuration model. We develop an algorithm, based on the assortativity of the sampled graphs, for estimating the gap between effectively independent MCMC states, along with a computationally efficient gap-estimation heuristic derived from analyzing a corpus of 509 empirical networks. We also provide a convergence detection method based on the Dickey-Fuller Generalized Least Squares test, which we show is more accurate and efficient than three alternative Markov chain convergence tests.
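For concreteness, here is a minimal Python sketch of the degree-preserving double-edge swap in the simple-graph space. A production MCMC sampler would also count rejected moves as repeated states to preserve the stationary distribution, which this illustration omits.

```python
import random

def double_edge_swap(edge_list, n_swaps):
    """Sketch: degree-preserving double-edge swaps on a simple graph.

    Rewiring (u, v), (x, y) into (u, x), (v, y) leaves every node's degree
    intact; moves that would create a self-loop or multi-edge are rejected.
    """
    edges = [tuple(e) for e in edge_list]
    edge_set = {frozenset(e) for e in edges}
    done = 0
    while done < n_swaps:
        (u, v), (x, y) = random.sample(edges, 2)
        if random.random() < 0.5:       # choose one of the two rewirings
            x, y = y, x
        if len({u, v, x, y}) < 4:
            continue                    # would create a self-loop: reject
        if frozenset((u, x)) in edge_set or frozenset((v, y)) in edge_set:
            continue                    # would create a multi-edge: reject
        edge_set -= {frozenset((u, v)), frozenset((x, y))}
        edge_set |= {frozenset((u, x)), frozenset((v, y))}
        edges = [tuple(e) for e in edge_set]
        done += 1
    return edges
```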

Influence operations are large-scale efforts to manipulate public opinion. The rapid detection and disruption of these operations is critical for healthy public discourse. Emergent AI technologies may enable novel operations which evade current detection methods and influence public discourse on social media with greater scale, reach, and specificity. New methods with inductive learning capacity will be needed to identify these novel operations before they indelibly alter public opinion and events. We develop an inductive learning framework which: 1) determines content- and graph-based indicators that are not specific to any operation; 2) uses graph learning to encode abstract signatures of coordinated manipulation; and 3) evaluates generalization capacity by training and testing models across operations originating from Russia, China, and Iran. We find that this framework enables strong cross-operation generalization while also revealing salient indicators, illustrating a generic approach which directly complements transductive methodologies, thereby enhancing detection coverage.
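The cross-operation evaluation in step 3 amounts to a leave-one-operation-out protocol. The sketch below shows the general shape of such a loop; the origin labels and the `train_fn`/`eval_fn` callables are placeholders, not the paper's implementation.

```python
def leave_one_operation_out(datasets, train_fn, eval_fn):
    """Sketch: cross-operation generalization via held-out origins.

    `datasets` maps an origin label (e.g. "RU", "CN", "IR") to that
    operation's data; the model is trained on all other origins and
    evaluated on the held-out one.
    """
    scores = {}
    for held_out in datasets:
        train = [d for origin, d in datasets.items() if origin != held_out]
        model = train_fn(train)
        scores[held_out] = eval_fn(model, datasets[held_out])
    return scores
```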

Accurately quantifying and removing submerged underwater waste plays a crucial role in safeguarding marine life and preserving the environment. While detecting floating and surface debris is relatively straightforward, quantifying submerged waste presents significant challenges due to factors such as light refraction, absorption, suspended particles, and color distortion. This paper addresses these challenges by developing a custom dataset and an efficient detection approach for submerged marine debris. The dataset encompasses diverse underwater environments and incorporates annotations for the precise labeling of debris instances. Ultimately, the primary objective of this custom dataset is to enhance the diversity of litter instances and improve their detection accuracy in deep submerged environments by leveraging state-of-the-art deep learning architectures.

Cloud computing is a model that simplifies user access to data and computing power on demand. Its main objective is to accommodate users' growing needs by decreasing dependence on human resources, minimizing expenses, and enhancing the speed of data access. Nevertheless, preserving security and privacy in cloud computing systems poses notable challenges: because these systems have a distributed structure, they are susceptible to unauthorized access, a fundamental problem. In the context of cloud computing, the provision of services on demand makes them targets for common attacks such as Denial of Service (DoS), including Economic Denial of Sustainability (EDoS) and Distributed Denial of Service (DDoS). These attacks can be classified into three categories: bandwidth consumption attacks, application-specific attacks, and connection layer attacks. Most studies in this area have concentrated on a single type of attack, and the concurrent detection of multiple DoS attacks is often overlooked. This article proposes a method to identify four types of attacks: HTTP, database, TCP SYN, and DNS flood. The aim is to present a universal algorithm that detects all four attacks effectively, instead of using a separate algorithm for each one. In this technique, seventeen server parameters, such as memory usage, CPU usage, and input/output counts, are extracted and monitored for changes; the failure point is identified using the CUSUM algorithm to calculate the likelihood of each attack. Subsequently, a fuzzy neural network determines whether an attack has occurred. Compared with the Snort software, the proposed method significantly improves the average detection rate, from 57% to 95%.
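As a rough illustration of the change-detection stage, the following Python sketch runs a one-sided CUSUM detector over a single monitored server metric. The drift and threshold values are illustrative assumptions, and the fuzzy neural network stage that follows in the paper is not shown.

```python
import numpy as np

def cusum_change_point(x, drift=0.5, threshold=5.0):
    """Sketch: one-sided CUSUM change detector for a server metric.

    Returns the index at which the cumulative positive deviation first
    exceeds `threshold`, or None if no change is detected. In the paper,
    such change scores for seventeen metrics feed a fuzzy neural network.
    """
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / (x.std() + 1e-9)   # standardize the series
    s = 0.0
    for t, v in enumerate(z):
        s = max(0.0, s + v - drift)         # accumulate positive deviations
        if s > threshold:
            return t                        # index of the detected change
    return None
```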

Trusted execution environments (TEEs) in several existing and upcoming CPUs demonstrate the success of confidential computing, with the caveat that tenants cannot use accelerators such as GPUs and FPGAs. Even if the accelerators have TEE support, user code executing on the CPU in a confidential VM has to rely on software-based encryption to facilitate communication between VMs and accelerators. Even after hardware changes to enable TEEs on both sides and software changes to adapt existing code to leverage these features, the result is redundant data copies and hardware encryption at the bus level and on the accelerator, degrading performance and defeating the purpose of using accelerators. In this paper, we reconsider the Arm Confidential Computing Architecture (CCA) design, an upcoming TEE feature in Arm v9, to address this gap. We observe that CCA offers the right abstraction and mechanisms to allow confidential VMs to use accelerators as a first-class abstraction while relying on hardware-based memory protection to preserve security. We build Acai, a CCA-based solution, to demonstrate the feasibility of our approach without changes to hardware or software on the CPU and the accelerator. Our experimental results on GPUs and FPGAs show that Acai can achieve strong security guarantees with low performance overheads.

Bid optimization for online advertising from a single advertiser's perspective has been thoroughly investigated in both academic research and industrial practice. However, existing work typically assumes that competitors do not change their bids, i.e., that the winning price is fixed, leading to poor performance of the derived solutions. Although a few studies use multi-agent reinforcement learning to set up a cooperative game, they still suffer from the following drawbacks: (1) they fail to avoid collusion solutions, in which all the advertisers involved in an auction collude to bid an extremely low price on purpose; (2) previous works cannot handle the underlying complex bidding environment well, leading to poor model convergence. This problem is amplified when handling the multiple objectives of advertisers, which are practical demands not considered by previous work. In this paper, we propose a novel multi-objective cooperative bid optimization formulation called Multi-Agent Cooperative bidding Games (MACG). MACG sets up a carefully designed multi-objective optimization framework in which the different objectives of advertisers are incorporated. A global objective to maximize the overall profit of all advertisements is added to encourage better cooperation and to protect self-bidding advertisers. To avoid collusion, we also introduce an extra platform revenue constraint. We analyze the optimal functional form of the bidding formula theoretically and design a policy network accordingly to generate auction-level bids. We then design an efficient multi-agent evolutionary strategy for model optimization. Offline experiments and online A/B tests conducted on the Taobao platform indicate that both individual advertisers' objectives and the global profit are significantly improved compared with state-of-the-art methods.
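For intuition about the model-optimization stage, here is a minimal sketch of a natural-evolution-strategies-style parameter update, the kind of gradient-free optimizer commonly used to train policy networks without backpropagating through the auction. The exact multi-agent variant used in MACG is not specified here, and `reward_fn` is a placeholder.

```python
import numpy as np

def es_update(theta, reward_fn, pop_size=32, sigma=0.1, lr=0.02):
    """Sketch: one evolutionary-strategy step on policy parameters.

    Perturbs `theta` with Gaussian noise, scores each perturbation with
    `reward_fn` (a placeholder for the bidding objective), and moves the
    parameters along the reward-weighted noise directions.
    """
    eps = np.random.randn(pop_size, theta.size)
    rewards = np.array([reward_fn(theta + sigma * e) for e in eps])
    rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-9)
    grad = eps.T @ rewards / (pop_size * sigma)   # NES-style gradient estimate
    return theta + lr * grad
```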
