
The flexibility and reconfigurability at the radio frequency (RF) front-end offered by the fluid antenna system (FAS) make this technology promising for providing remarkable diversity gains in networks with small and constrained devices. Toward this direction, this letter compares the outage probability (OP) performance of non-diversity and diversity FAS receivers operating over spatially correlated Nakagami-$m$ fading channels. Although the system properties of FAS render its analysis complex, we derive a simple yet accurate closed-form approximation by relying on a novel asymptotic matching method for the OP of a maximum-gain combining FAS (MGC-FAS). The approximation proceeds in two stages: the approximation of the cumulative distribution function (CDF) of each MGC-FAS branch, and then the approximation of the end-to-end CDF of the MGC-FAS scheme. With these results, closed-form expressions for the OP and the asymptotic OP are derived. Finally, numerical results validate our approximation of the MGC-FAS scheme and demonstrate its accuracy under different diversity FAS scenarios.
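
As a hedged numerical companion to this abstract, the sketch below estimates the OP of an MGC-FAS receiver by Monte Carlo rather than by the letter's closed-form asymptotic matching. The Jakes-type spatial correlation J0(2πd), the integer-m construction of correlated Nakagami-m gains, and the reading of maximum-gain combining as summing each branch's best-port gain are our assumptions, not details taken from the letter.

```python
import numpy as np
from scipy.special import j0

def correlated_nakagami_gains(n_ports, m, n_samples, spacing=0.1, rng=None):
    # Correlated Nakagami-m power gains across FAS ports (integer m):
    # sum of 2m correlated Gaussian components per port, with Jakes-type
    # spatial correlation J0(2*pi*d) between ports (assumed model).
    rng = np.random.default_rng() if rng is None else rng
    d = np.abs(np.subtract.outer(np.arange(n_ports), np.arange(n_ports))) * spacing
    L = np.linalg.cholesky(j0(2 * np.pi * d) + 1e-9 * np.eye(n_ports))
    g = np.zeros((n_samples, n_ports))
    for _ in range(2 * m):
        x = rng.standard_normal((n_samples, n_ports)) @ L.T
        g += x**2 / (2 * m)                       # unit-mean power gain
    return g

def outage_probability(n_branches, ports_per_branch, m, snr_db, rate, n_samples=200_000):
    # MGC-FAS outage (our reading): each branch selects its strongest port,
    # branch gains are summed, and outage occurs if the combined SNR
    # cannot support the target rate.
    gamma_th = 2**rate - 1
    combined = sum(
        correlated_nakagami_gains(ports_per_branch, m, n_samples).max(axis=1)
        for _ in range(n_branches)
    )
    return np.mean(10 ** (snr_db / 10) * combined < gamma_th)

print(outage_probability(n_branches=2, ports_per_branch=10, m=2, snr_db=0, rate=1.0))
```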

Related content

Confidence intervals (CIs) based on the 'robust sandwich variance' estimator are not always conservative for inverse probability weighting (IPW) estimators of the average treatment effect on the treated (ATT) and the average treatment effect in the overlap population (ATO). In this manuscript, we identify scenarios in which this variance estimator does yield conservative CIs. Specifically, for the ATT, a conservative CI can be derived when the treatment effect is homogeneous or when the interaction effect surpasses the effect of the covariates alone. For the ATO, conservative CIs can be derived under certain conditions, such as when treatment effects are homogeneous, when there are significant treatment-confounder interactions, or when the control group is large.
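
To make the setting concrete, here is a minimal sketch of the estimator under discussion: an IPW ATT estimate with the 'robust sandwich variance' obtained from a weighted regression that treats the estimated propensity scores as fixed. The simulated data, with a homogeneous treatment effect of 1, correspond to the first scenario above in which the resulting CI should be conservative; the data-generating values are illustrative only.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
x = rng.standard_normal((n, 2))                       # confounders
ps = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.25 * x[:, 1])))
z = rng.binomial(1, ps)                               # treatment indicator
y = 1.0 * z + x @ np.array([1.0, -0.5]) + rng.standard_normal(n)

# Step 1: estimated propensity scores from a logistic model.
e = sm.Logit(z, sm.add_constant(x)).fit(disp=0).predict()

# Step 2: ATT weights (1 for treated, e/(1-e) for controls).
w = np.where(z == 1, 1.0, e / (1 - e))

# Step 3: weighted regression of Y on treatment; HC0 'robust sandwich'
# variance treating the weights as fixed, as analyzed in the manuscript.
fit = sm.WLS(y, sm.add_constant(z), weights=w).fit(cov_type="HC0")
att, se = fit.params[1], fit.bse[1]
print(f"ATT = {att:.3f}  95% CI = ({att - 1.96*se:.3f}, {att + 1.96*se:.3f})")
```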

Privacy is a central challenge for systems that learn from sensitive data sets, especially when a system's outputs must be continuously updated to reflect changing data. We consider the achievable error for differentially private continual release of a basic statistic -- the number of distinct items -- in a stream where items may be both inserted and deleted (the turnstile model). With only insertions, existing algorithms have additive error just polylogarithmic in the length of the stream $T$. We uncover a much richer landscape in the turnstile model, even without considering memory restrictions. We show that every differentially private mechanism that handles insertions and deletions has worst-case additive error at least $T^{1/4}$ even under a relatively weak, event-level privacy definition. Then, we identify a parameter of the input stream, its maximum flippancy, that is low for natural data streams and for which we give tight parameterized error guarantees. Specifically, the maximum flippancy is the largest number of times that the contribution of a single item to the distinct elements count changes over the course of the stream. We present an item-level differentially private mechanism that, for all turnstile streams with maximum flippancy $w$, continually outputs the number of distinct elements with an $O(\sqrt{w} \cdot \mathrm{poly}\log T)$ additive error, without requiring prior knowledge of $w$. We prove that this is the best achievable error bound that depends only on $w$, for a large range of values of $w$. When $w$ is small, the error of our mechanism is similar to the polylogarithmic in $T$ error in the insertion-only setting, bypassing the hardness in the turnstile model.
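
The maximum flippancy parameter has a simple operational definition that is easy to compute exactly in one pass. The sketch below does just that for a turnstile stream of (item, ±1) updates; it illustrates the parameter, not the differentially private mechanism itself.

```python
from collections import defaultdict

def max_flippancy(stream):
    # stream: iterable of (item, +1/-1) turnstile updates.
    # An item's flippancy is the number of times its contribution to the
    # distinct-elements count flips (its count moves between zero and nonzero).
    counts = defaultdict(int)
    flips = defaultdict(int)
    for item, delta in stream:
        before = counts[item]
        counts[item] += delta
        if (before == 0) != (counts[item] == 0):
            flips[item] += 1
    return max(flips.values(), default=0)

stream = [("a", +1), ("a", -1), ("a", +1), ("b", +1), ("a", -1)]
print(max_flippancy(stream))   # "a" flips 0->1->0->1->0: flippancy 4
```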

Many real-time applications of the Internet of Things (IoT) need to deal with correlated information generated by multiple sensors. The design of efficient status update strategies that minimize the Age of Correlated Information (AoCI) is a key factor. In this paper, we consider an IoT network consisting of sensors equipped with the energy harvesting (EH) capability. We optimize the average AoCI at the data fusion center (DFC) by appropriately managing the energy harvested by sensors, whose true battery states are unobservable during the decision-making process. In particular, we first formulate the dynamic status update procedure as a partially observable Markov decision process (POMDP), where the environmental dynamics are unknown to the DFC. In order to address the challenges arising from the causality of energy usage, unknown environmental dynamics, unobservability of the sensors' true battery states, and the large-scale discrete action space, we devise a deep reinforcement learning (DRL)-based dynamic status update algorithm. The algorithm leverages the advantages of the soft actor-critic and long short-term memory techniques. Meanwhile, it incorporates our proposed action decomposition and mapping mechanism. Extensive simulations are conducted to validate the effectiveness of our proposed algorithm by comparing it with available DRL algorithms for POMDPs.
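
As a much-simplified illustration of the status update trade-off (not the paper's SAC-plus-LSTM algorithm), the sketch below simulates single-sensor age dynamics under energy harvesting with a greedy policy that transmits whenever energy is available. Reducing AoCI to a per-slot age recursion, and all numeric parameters, are our simplifications.

```python
import numpy as np

def simulate_age(T=10_000, p_harvest=0.3, battery_cap=5, p_success=0.9, seed=0):
    # Simplified single-sensor illustration: the paper's AoCI couples several
    # sensors; here we track plain age-of-information style dynamics.
    # Greedy policy: spend one energy unit whenever the battery is nonempty.
    rng = np.random.default_rng(seed)
    age, battery, total_age = 0, 0, 0
    for _ in range(T):
        if battery > 0:
            battery -= 1
            if rng.random() < p_success:     # update delivered to the DFC
                age = 0
        age += 1
        battery = min(battery + (rng.random() < p_harvest), battery_cap)
        total_age += age
    return total_age / T

print(f"average age under greedy policy: {simulate_age():.2f}")
```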

Agents built with large language models (LLMs) have recently made great advances. However, most of the efforts focus on single-agent or cooperative settings, leaving more general multi-agent environments underexplored. We propose a new framework powered by reinforcement learning (RL) to develop strategic language agents, i.e., LLM-based agents with strategic thinking ability, for a popular language game, Werewolf. Werewolf is a social deduction game with hidden roles that involves both cooperation and competition and emphasizes deceptive communication and diverse gameplay. Our agent tackles this game by first using LLMs to reason about potential deceptions and generate a set of strategically diverse actions. Then an RL policy, which selects an action from the candidates, is learned by population-based training to enhance the agents' decision-making ability. By combining LLMs with the RL policy, our agent produces a variety of emergent strategies, achieves the highest win rate against other LLM-based agents, and stays robust against adversarial human players in the Werewolf game.
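
A schematic of the decision loop described above might look as follows: an LLM step proposes a small set of candidate actions and a lightweight RL policy scores and samples among them. The stubbed propose_actions and featurize functions and the linear policy are hypothetical stand-ins; the paper's actual agent uses full LLM reasoning and population-based training.

```python
import numpy as np

rng = np.random.default_rng(0)

def propose_actions(game_state):
    # Stub for the LLM step: the paper's agent reasons about potential
    # deceptions and returns a strategically diverse candidate set.
    return ["vote player 3", "defend player 1", "stay silent"]

def featurize(game_state, action):
    # Hypothetical feature map; the real state/action encoding is richer.
    return np.array([len(action), game_state["round"], 1.0])

def select_action(policy_w, game_state, temperature=1.0):
    # RL policy: softmax over learned scores of the LLM-proposed candidates.
    candidates = propose_actions(game_state)
    scores = np.array([featurize(game_state, a) @ policy_w for a in candidates])
    probs = np.exp((scores - scores.max()) / temperature)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

policy_w = np.array([0.1, -0.2, 0.5])   # would be learned by population-based training
print(select_action(policy_w, {"round": 2}))
```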

We analyze low-power short-range wireless communications through a low-rank fading channel - a bona fide use case in many communication scenarios requiring simple wireless connectivity with much relaxed constraints on throughput and data latency. This is certainly true, for instance, in low-complexity wireless channels in low-rate wireless personal area networks (LR-WPANs). Low-rate communication on control channels in wireless networks is another relevant example. Specifically, we characterize the capacity of a low-rank wireless channel with varying fading severity at low signal-to-noise ratios (SNRs). The rank deficiency is incorporated by introducing a pinhole condition in the channel. The channel capacity degradation with fading severity at high SNRs is well known: the probability of deep fades increases significantly with higher fading severity, resulting in poor performance. Our analysis of the double-fading pinhole channel at low SNR reveals a counter-intuitive result: \emph{higher fading severity enables higher capacity at sufficiently low SNR}. The underlying reason is that at low SNRs, ergodic capacity depends crucially on the probability distribution of channel peaks (i.e., the tail distribution); for the pinhole channel, the tail distribution improves with increased fading severity. This allows a transmitter operating at low SNR to exploit channel peaks more efficiently, resulting in a net improvement in achievable spectral efficiency. We derive a new key result quantifying this dependence for the double-Nakagami-$m$ fading pinhole channel: the ergodic capacity $C \propto (m_T m_R)^{-1}$ at low SNR, where $m_T m_R$ is the product of the fading (severity) parameters of the two independent Nakagami-$m$ fadings involved.
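
The low-SNR claim can be probed numerically. The sketch below draws the pinhole power gain as a product of two independent unit-mean gamma variables (the Nakagami-m power gains) and computes the ergodic capacity with waterfilling power adaptation, which is what lets the transmitter concentrate power on channel peaks. The waterfilling formulation with transmit CSI is our assumption about the operating regime, not a detail taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

def pinhole_capacity(m_t, m_r, p_avg, n=1_000_000, seed=0):
    # End-to-end pinhole power gain: product of two independent unit-mean
    # gamma variables (Nakagami-m power gains with shape m, scale 1/m).
    rng = np.random.default_rng(seed)
    g = rng.gamma(m_t, 1 / m_t, n) * rng.gamma(m_r, 1 / m_r, n)
    # Waterfilling with transmit CSI: p(g) = (1/g0 - 1/g)^+ with E[p] = p_avg.
    cutoff = brentq(lambda g0: np.mean(np.clip(1 / g0 - 1 / g, 0, None)) - p_avg,
                    1e-12, 1e12)
    return np.mean(np.log2(np.maximum(g / cutoff, 1.0)))

# Per the paper, capacity at a sufficiently small power budget should scale
# roughly as 1/(m_T * m_R), i.e., higher severity (smaller m) helps.
for m in [(0.5, 0.5), (1.0, 1.0), (2.0, 2.0)]:
    print(m, f"{pinhole_capacity(*m, p_avg=1e-3):.6f}")
```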

Beamforming is a signal processing technique in which an array of antenna elements is steered to transmit and receive radio signals in a specific direction. The use of millimeter wave (mmWave) frequencies and multiple-input multiple-output (MIMO) beamforming are considered key innovations of 5th Generation (5G) and beyond communication systems. The technique first performs a beam alignment procedure, followed by data transfer in the aligned directions between the transmitter and the receiver. Traditionally, beam alignment involves periodic and exhaustive beam sweeping at both the transmitter and the receiver, a slow process that causes extra communication overhead with MIMO and massive MIMO radio units. In applications such as beam tracking, angular-velocity estimation, and beam steering, the beam alignment procedure is optimized by estimating the beam directions using first-order polynomial approximations. Recent learning-based state-of-the-art strategies for fast mmWave beam alignment also require exploration over exhaustive beam pairs during training, which becomes costly for larger antenna configurations. In this work, we first optimize beam alignment cost functions, e.g. the data rate, to reduce the beam sweeping overhead by applying polynomial approximations of their partial derivatives, which can then be solved as a system of polynomial equations using well-known tools from algebraic geometry. At this point, a question arises: what is a good polynomial approximation? We attempt to answer this question empirically. Preliminary experiments indicate that our estimated polynomial approximations attain a sweet spot in terms of solver speed and accuracy when evaluated on test beamforming problems.
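
A one-dimensional toy version of the idea reads as follows: probe a few beam angles, fit a polynomial to the measured gains, and maximize by solving for the roots of the derivative, a 1-D stand-in for the system-of-polynomial-equations approach described above. The sinc-squared array gain, the probe grid, and the degree-4 fit are illustrative choices; the paper works with multivariate polynomial systems and algebraic-geometry solvers.

```python
import numpy as np
from numpy.polynomial import polynomial as P

def array_gain(theta, target=0.6, n_antennas=16):
    # Toy beam-alignment objective with a peak at the unknown target angle.
    x = np.pi * n_antennas * (theta - target) / 2
    return np.abs(np.sinc(x / np.pi)) ** 2

probe_angles = np.linspace(0.45, 0.75, 9)       # sparse beam sweep
gains = array_gain(probe_angles)                # measured objective values

coeffs = P.polyfit(probe_angles, gains, 4)      # polynomial surrogate
crit = P.polyroots(P.polyder(coeffs))           # stationary points
crit = crit[np.isreal(crit)].real
crit = crit[(crit >= 0.45) & (crit <= 0.75)]
best = crit[np.argmax(P.polyval(crit, coeffs))]
print(f"estimated beam angle: {best:.3f} (true 0.600)")
```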

Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power in processing graph-structured data. Typical GCNs and their variants work under a homophily assumption (i.e., nodes of the same class are prone to connect to each other), while ignoring the heterophily that exists in many real-world networks (i.e., nodes of different classes tend to form edges). Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining intermediate representations, which introduces noise and irrelevant information into the result. However, these methods do not change the propagation mechanism itself, which works under the homophily assumption and is a fundamental part of GCNs; this makes it difficult to distinguish the representations of nodes from different classes. To address this problem, in this paper we design a novel propagation mechanism that can automatically change the propagation and aggregation process according to the homophily or heterophily between node pairs. To adaptively learn the propagation process, we introduce two measures of the homophily degree between node pairs, learned from topological and attribute information, respectively. We then incorporate the learnable homophily degree into the graph convolution framework, which is trained in an end-to-end manner, enabling it to go beyond the assumption of homophily. More importantly, we theoretically prove that our model can constrain the similarity of representations between nodes according to their homophily degree. Experiments on seven real-world datasets demonstrate that this new approach outperforms state-of-the-art methods under heterophily or low homophily, and achieves competitive performance under homophily.
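
A forward-pass sketch of such an adaptive propagation layer is given below. A signed per-edge score in [-1, 1] reweights the normalized adjacency, so messages from likely-heterophilous neighbours are attenuated or negated rather than blindly averaged. Using a single cosine-similarity score of transformed attributes is our simplification; the paper learns two homophily measures (topological and attribute-based) end to end.

```python
import numpy as np

def adaptive_propagation(X, A, W, Wh):
    # One forward layer (sketch): an attribute-based homophily score per
    # node pair modulates how strongly each neighbour's message is mixed in.
    n = A.shape[0]
    deg = A.sum(1)
    A_norm = A / np.sqrt(np.outer(deg, deg) + 1e-9)   # symmetric normalization
    H = X @ Wh
    # cosine similarity of transformed attributes as a proxy homophily degree
    Hn = H / (np.linalg.norm(H, axis=1, keepdims=True) + 1e-9)
    S = np.tanh(Hn @ Hn.T)                            # scores in [-1, 1]
    prop = (np.eye(n) + S * A_norm) @ H               # signed propagation
    return np.maximum(prop @ W, 0)                    # ReLU

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
A = (rng.random((5, 5)) < 0.4).astype(float)
A = np.triu(A, 1); A += A.T                           # undirected toy graph
out = adaptive_propagation(X, A, rng.standard_normal((8, 4)), rng.standard_normal((8, 8)))
print(out.shape)   # (5, 4)
```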

The dominant NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g. sentiment classification, span-prediction based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e. that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we show that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution takes inspiration from information geometry to derive a new gradient update rule which alleviates catastrophic forgetting issues during adaptation.
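
As one concrete instance of the DRO idea (simpler than the parametric reformulations developed in the thesis), group DRO maintains a weight per domain and exponentially upweights domains with high loss, so the model optimizes a worst-case mixture instead of the average training distribution. A minimal sketch of the reweighting step:

```python
import numpy as np

def group_dro_weights(group_losses, q, eta=0.1):
    # One step of group-DRO-style reweighting (Sagawa et al.): groups with
    # higher loss receive exponentially more weight; the training loss is
    # then the q-weighted mixture of per-group losses.
    q = q * np.exp(eta * group_losses)
    return q / q.sum()

q = np.ones(3) / 3                       # e.g., three text domains
losses = np.array([0.2, 0.9, 0.4])       # per-domain losses this step
for _ in range(50):
    q = group_dro_weights(losses, q)
print(np.round(q, 3))                    # weight concentrates on the hardest domain
```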

Deep neural networks have revolutionized many machine learning tasks in power systems, ranging from pattern recognition to signal processing. The data in these tasks are typically represented in Euclidean domains. Nevertheless, there is an increasing number of applications in power systems where data are collected from non-Euclidean domains and represented as graph-structured data with high-dimensional features and interdependency among nodes. The complexity of graph-structured data has brought significant challenges to existing deep neural networks defined in Euclidean domains. Recently, many studies on extending deep neural networks to graph-structured data in power systems have emerged. In this paper, a comprehensive overview of graph neural networks (GNNs) in power systems is presented. Specifically, several classical paradigms of GNN structures (e.g., graph convolutional networks, graph recurrent neural networks, graph attention networks, graph generative networks, spatial-temporal graph convolutional networks, and hybrid forms of GNNs) are summarized, and key applications in power systems, such as fault diagnosis, power prediction, power flow calculation, and data generation, are reviewed in detail. Furthermore, the main issues and research trends concerning the application of GNNs in power systems are discussed.
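
For readers new to GNNs, the classical graph convolution that most of the surveyed paradigms build on is compact enough to state in a few lines. The bus-level features in the toy example (e.g., voltage and power injections) are hypothetical placeholders, not from the survey.

```python
import numpy as np

def gcn_layer(A, H, W):
    # Classical graph convolution (Kipf & Welling): symmetric normalization
    # of the adjacency with self-loops, then a linear map and ReLU.
    A_hat = A + np.eye(A.shape[0])               # add self-loops
    d = A_hat.sum(1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))     # D^{-1/2} A_hat D^{-1/2}
    return np.maximum(A_norm @ H @ W, 0)

# Toy example: a 4-bus network with 3 measurements per bus.
rng = np.random.default_rng(1)
A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
H = rng.standard_normal((4, 3))                  # e.g., V, P, Q per bus
print(gcn_layer(A, H, rng.standard_normal((3, 2))).shape)   # (4, 2)
```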

High spectral dimensionality and the shortage of annotations make hyperspectral image (HSI) classification a challenging problem. Recent studies suggest that convolutional neural networks can learn discriminative spatial features, which play a paramount role in HSI interpretation. However, most of these methods ignore the distinctive spectral-spatial characteristics of hyperspectral data. In addition, a large amount of unlabeled data remains an unexploited gold mine for efficient data use. Therefore, we propose an integration of generative adversarial networks (GANs) and probabilistic graphical models for HSI classification. Specifically, we use a spectral-spatial generator and a discriminator to identify the land cover categories of hyperspectral cubes. Moreover, to take advantage of the large amount of unlabeled data, we adopt a conditional random field to refine the preliminary classification results generated by the GANs. Experimental results obtained on two commonly studied datasets demonstrate that the proposed framework achieves encouraging classification accuracy with a small number of labeled samples for training.
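
The CRF refinement step can be approximated in a few lines as mean-field smoothing: each pixel's class distribution is pulled toward those of spectrally similar neighbours. The 4-neighbour grid, the Gaussian similarity kernel, the wrap-around boundaries via np.roll, and all parameter values are our simplifications of a full CRF, not the paper's formulation.

```python
import numpy as np

def crf_refine(probs, features, n_iters=5, w_pair=0.8, sigma=0.5):
    # Simplified mean-field smoothing in the spirit of CRF refinement:
    # probs (H, W, C) are preliminary soft labels (e.g., from the GAN),
    # features (H, W, F) are per-pixel spectral features.
    unary = np.log(probs + 1e-9)
    q = probs.copy()
    for _ in range(n_iters):
        msg = np.zeros_like(q)
        for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:   # 4-neighbourhood
            shifted_q = np.roll(q, (dy, dx), axis=(0, 1))
            shifted_f = np.roll(features, (dy, dx), axis=(0, 1))
            sim = np.exp(-np.sum((features - shifted_f) ** 2, -1) / (2 * sigma**2))
            msg += sim[..., None] * shifted_q               # similarity-weighted vote
        logits = unary + w_pair * msg
        q = np.exp(logits - logits.max(-1, keepdims=True))
        q /= q.sum(-1, keepdims=True)
    return q

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(4), size=(8, 8))    # preliminary soft labels
feats = rng.standard_normal((8, 8, 10))           # spectral features per pixel
print(crf_refine(probs, feats).shape)             # (8, 8, 4)
```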
