
Due to the increasing demand for seamless connectivity and massive information exchange across the world, integrated satellite-terrestrial communication systems are developing rapidly. To shed light on the design of such systems, we consider an uplink communication model consisting of a single satellite, a single terrestrial station, and multiple ground users. The terrestrial station uses decode-and-forward (DF) relaying to facilitate communication between the ground users and the satellite. The channel between the satellite and the terrestrial station is assumed to be a quasi-static shadowed Rician fading channel, while the channels between the terrestrial station and the ground users are assumed to experience independent quasi-static Rayleigh fading. We consider two cases of channel state information (CSI) availability. When instantaneous CSI is available, we derive the instantaneous achievable sum rate of all ground users and formulate an optimization problem to maximize this sum rate. When only channel distribution information (CDI) is available, we derive a closed-form expression for the outage probability and formulate a second optimization problem to minimize it. Both optimization problems correspond to scheduling algorithms for ground users. For both cases, we propose low-complexity user scheduling algorithms and demonstrate their efficiency via numerical simulations.
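As a toy illustration of the instantaneous-CSI case, the sketch below draws Rayleigh-faded access-link gains and greedily schedules the strongest users, capping the DF sum rate by an assumed satellite-link capacity. All parameters (the SNR, `c_sat`, and the greedy rule itself) are illustrative assumptions, not the paper's algorithm.

```python
import math
import random

def schedule_users(gains, k, snr=10.0, c_sat=20.0):
    """Greedy sketch: pick the k users with the largest instantaneous
    access-link gains, then cap the DF sum rate by the (assumed)
    satellite-link capacity c_sat (bits/s/Hz)."""
    chosen = sorted(range(len(gains)), key=lambda i: gains[i], reverse=True)[:k]
    access_rate = sum(math.log2(1.0 + snr * gains[i]) for i in chosen)
    return chosen, min(access_rate, c_sat)

rng = random.Random(0)
# |h|^2 under Rayleigh fading is exponentially distributed (unit mean here)
gains = [rng.expovariate(1.0) for _ in range(8)]
users, rate = schedule_users(gains, k=3)
```

Under CDI-only operation, the same greedy structure would instead rank users by channel statistics rather than by instantaneous draws.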

Related content

This journal publishes high-quality papers that expand the scope of operations research and computing, seeking original research papers on theory, methods, experiments, systems, and applications, as well as novel survey and tutorial papers and papers describing new and useful software tools.
December 8, 2022

Coding schemes for several problems in network information theory are constructed starting from point-to-point channel codes that are designed for symmetric channels. Given that the point-to-point codes satisfy certain properties pertaining to the rate, the error probability, and the distribution of decoded sequences, bounds on the performance of the coding schemes are derived and shown to hold irrespective of other properties of the codes. In particular, we consider the problems of lossless and lossy source coding, Slepian--Wolf coding, Wyner--Ziv coding, Berger--Tung coding, multiple description coding, asymmetric channel coding, Gelfand--Pinsker coding, coding for multiple access channels, Marton coding for broadcast channels, and coding for cloud radio access networks (C-RANs). We show that the coding schemes can achieve the best known inner bounds for most of these problems, provided that the constituent point-to-point channel codes are rate-optimal. This would allow one to leverage commercial off-the-shelf codes for point-to-point symmetric channels in the practical implementation of codes over networks. Simulation results demonstrate the gain of the proposed coding schemes compared to existing practical solutions to these problems.

Cellular-connected unmanned aerial vehicles (UAVs) have attracted a surge of research interest in both academia and industry. To support aerial user equipment (UE) in existing cellular networks, one promising approach is to assign a portion of the system bandwidth exclusively to the UAV-UEs. This is especially favorable for use cases in which a large number of UAV-UEs operate, e.g., package delivery close to a warehouse. Although the nearly line-of-sight (LoS) channels result in higher received power, UAVs can in turn cause severe interference to each other in the same frequency band. In this contribution, we focus on the uplink communications of massive numbers of cellular-connected UAVs. Different power allocation algorithms are proposed, based on the successive convex approximation (SCA) principle, to either maximize the minimal spectrum efficiency (SE) or maximize the overall SE in the presence of severe interference. One challenge is that a single UAV can affect a large area, meaning that many more UAV-UEs must be considered in the optimization problem than for terrestrial UEs. The necessity of single-carrier uplink transmission further complicates the problem. Nevertheless, we find that the large coherence bandwidths and coherence times of the propagation channels can be leveraged. The performance of the proposed algorithms is evaluated via extensive simulations in both the full-buffer transmission mode and the bursty-traffic mode. Results show that the proposed algorithms can effectively enhance the uplink SEs. This work can be considered a first attempt to deal with the interference among massive numbers of cellular-connected UAV-UEs through optimized power allocation.
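To make the interference coupling concrete, the toy computation below evaluates per-UAV uplink SE for a given power vector, where every co-channel UAV contributes interference. The gain matrix, powers, and noise level are made-up values for illustration; the paper's SCA-based algorithms optimize the powers rather than fixing them.

```python
import math

def uplink_se(powers, gains, noise=1e-3):
    """Spectrum efficiency log2(1 + SINR) per UAV-UE; every other UAV
    transmitting in the same band contributes interference."""
    se = []
    for i, p_i in enumerate(powers):
        signal = p_i * gains[i][i]
        interference = sum(p_j * gains[j][i]
                           for j, p_j in enumerate(powers) if j != i)
        se.append(math.log2(1.0 + signal / (interference + noise)))
    return se

# gains[j][i]: illustrative gain from UAV j toward the cell serving UAV i;
# near-LoS propagation makes the cross-gains non-negligible
gains = [[1.0, 0.3],
         [0.4, 1.0]]
se_full = uplink_se([1.0, 1.0], gains)      # both UAVs at full power
se_backoff = uplink_se([1.0, 0.25], gains)  # UAV 2 backs off
```

The comparison shows the max-min trade-off the paper optimizes: backing off one UAV raises the other's SE while lowering its own.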

Mathematical approaches for modeling dynamic traffic can roughly be divided into two categories: discrete packet routing models and continuous flow-over-time models. Despite vigorous research activity on models in both categories, the connection between these approaches has so far been poorly understood. In this work we establish this connection by specifying a (competitive) packet routing model, which is discrete in terms of flow and time, and by proving its convergence to the intensively studied model of flows over time with deterministic queuing. More precisely, we prove that the limit of the convergence process, obtained by decreasing the packet size and time step length in the packet routing model, constitutes a multi-commodity flow over time. In addition, we show that the convergence result implies the existence of approximate equilibria in the competitive version of the packet routing model. This is of significant interest because, as in almost all other competitive models, exact pure Nash equilibria cannot be guaranteed in the multi-commodity setting. Moreover, the introduced packet routing model with deterministic queuing is very application-oriented, as it is based on the network loading module of the agent-based transport simulation MATSim. As the present work is the first mathematical formalization of this simulation, it provides a theoretical foundation and an environment for provable mathematical statements about MATSim.

Most existing signcryption schemes generate pseudonyms via a key generation center (KGC) and usually rely on bilinear pairings to construct authentication schemes. The drawback is that these schemes not only incur heavy computation and communication costs during information exchange, but also cannot eliminate security risks because pseudonyms are never updated, so they do not work well for resource-constrained smart terminals in cyber-physical power systems (CPPSs). The main objective of this paper is to propose a novel, efficient signcryption scheme for resource-constrained smart terminals. First, a dynamic pseudonym self-generation mechanism (DPSGM) is explored to achieve privacy preservation and prevent the source from being linked. Second, combined with DPSGM, an efficient signcryption scheme based on certificateless cryptography (CLC) and elliptic curve cryptography (ECC) is designed, which significantly reduces the computation and communication burden. Furthermore, under the random oracle model (ROM), the confidentiality and non-repudiation of the proposed signcryption scheme are reduced to the elliptic curve discrete logarithm and computational Diffie-Hellman problems, which cannot be solved in polynomial time, guaranteeing security. Finally, the effectiveness and feasibility of the proposed signcryption scheme are confirmed by experimental analysis.
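As background for the hardness assumption mentioned above, here is a toy elliptic-curve group over a small prime field. The curve and base point are illustrative only (real schemes use standardized curves such as P-256), and this is a generic sketch, not the paper's construction.

```python
# Toy curve y^2 = x^3 + A*x + B over GF(P); parameters chosen for
# illustration only -- far too small for any real security.
P, A, B = 97, 2, 3
O = None  # point at infinity (group identity)

def add(p1, p2):
    """Elliptic-curve point addition over GF(P)."""
    if p1 is O:
        return p2
    if p2 is O:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return O                      # p2 is the inverse of p1
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    """Double-and-add scalar multiplication. Recovering k from (pt, k*pt)
    is the elliptic curve discrete logarithm problem (ECDLP)."""
    acc = O
    while k:
        if k & 1:
            acc = add(acc, pt)
        pt = add(pt, pt)
        k >>= 1
    return acc

G = (3, 6)  # on the curve: 6^2 = 36 and 3^3 + 2*3 + 3 = 36 (mod 97)
Q = mul(3, G)
```

The scheme's security reductions rest on the assumption that inverting `mul` (and the related computational Diffie-Hellman problem) is infeasible at cryptographic field sizes.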

Cardiovascular disease is one of the leading causes of death according to the WHO. Phonocardiography (PCG) is a cost-effective, non-invasive method suitable for heart monitoring. The main aim of this work is to classify heart sounds into normal/abnormal categories. Heart sounds are recorded using different stethoscopes and thus vary in domain; recent studies show that this variability can affect heart sound classification. This work presents a Siamese network architecture that learns the similarity between normal vs. normal or abnormal vs. abnormal signals and the difference between normal vs. abnormal signals. By applying this similarity and difference learning across all domains, domain-invariant heart sound classification can be achieved. We used the multi-domain 2016 PhysioNet/CinC Challenge dataset for evaluation. On the evaluation set provided by the challenge, we achieved a sensitivity of 82.8%, a specificity of 75.3%, and a mean accuracy of 79.1%. While overcoming the multi-domain problem, the proposed method surpasses the first-place method of the PhysioNet challenge by up to 10.9% in specificity and up to 5.6% in mean accuracy. Compared with similar state-of-the-art domain-invariant methods, our model also converges faster and performs better in specificity (by 4.1%) and mean accuracy (by 1.5%) with an equal number of training epochs.
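A minimal sketch of the pairwise objective a Siamese network typically optimizes is the contrastive loss below: same-class pairs are pulled together and different-class pairs are pushed at least a margin apart. The 2-D "embeddings" and the margin are placeholder values, not the paper's architecture or features.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def contrastive_loss(emb1, emb2, same_label, margin=1.0):
    """Contrastive loss on one pair of embeddings: pull same-class pairs
    together, push different-class pairs at least `margin` apart."""
    d = euclidean(emb1, emb2)
    if same_label:                       # normal-normal or abnormal-abnormal
        return d ** 2
    return max(0.0, margin - d) ** 2     # normal-abnormal

# toy 2-D embeddings standing in for the network's outputs
normal_a, normal_b, abnormal = [0.1, 0.2], [0.15, 0.25], [0.6, 0.5]
loss_same = contrastive_loss(normal_a, normal_b, True)    # small for close pair
loss_diff = contrastive_loss(normal_a, abnormal, False)   # positive: pair too close
```

Training this objective on pairs drawn from every recording domain is what encourages the learned embedding to be domain invariant.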

Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks. However, to work with the limited computation and communication capabilities of edge networks, a major challenge in developing decentralized bilevel optimization techniques is to lower the sample and communication complexities. This motivates us to develop a new decentralized bilevel optimization algorithm called DIAMOND (decentralized single-timescale stochastic approximation with momentum and gradient-tracking). The contributions of this paper are as follows: i) our DIAMOND algorithm adopts a single-loop structure rather than the natural double-loop structure of bilevel optimization, which offers low computation and implementation complexity; ii) compared to existing approaches, DIAMOND does not require any full gradient evaluations, which further reduces both sample and computational complexities; iii) through a careful integration of momentum information and gradient-tracking techniques, we show that DIAMOND enjoys $\mathcal{O}(\epsilon^{-3/2})$ sample and communication complexities for achieving an $\epsilon$-stationary solution, both of which are independent of the dataset sizes and significantly outperform existing works. Extensive experiments also verify our theoretical findings.

Emerging real-time multi-model ML (RTMM) workloads such as AR/VR and drone control often involve dynamic behaviors at various levels: task, model, and layer (i.e., ML operator) within a model. Such dynamic behaviors pose new challenges to the system software of an ML system because the overall system load is unpredictable, unlike in traditional ML workloads. In addition, real-time processing requires meeting deadlines, and multi-model workloads involve highly heterogeneous models. As RTMM workloads often run on resource-constrained devices (e.g., VR headsets), developing an effective scheduler is an important research problem. We therefore propose a new scheduler, SDRM3, that effectively handles various kinds of dynamicity in RTMM-style workloads targeting multi-accelerator systems. To make scheduling decisions, SDRM3 quantifies the unique requirements of RTMM workloads and uses the quantified scores to drive scheduling decisions, considering the current system load and other inference jobs on different models and input frames. SDRM3 has tunable parameters that provide fast adaptivity to dynamic workload changes based on gradient-descent-like online optimization, which typically converges within five steps for new workloads. In addition, we propose a method to exploit model-level dynamicity, based on a Supernet, to trade off scheduling effectiveness against model performance (e.g., accuracy) by dynamically selecting a proper sub-network of the Supernet based on the system load. In our evaluation on five realistic RTMM workload scenarios, SDRM3 reduces the overall UXCost, an energy-delay-product (EDP)-equivalent metric for real-time applications defined in the paper, by 37.7% and 53.2% on geometric mean (up to 97.6% and 97.1%) compared to state-of-the-art baselines, demonstrating the efficacy of our scheduling methodology.

This paper focuses on a stochastic system identification problem: given time series observations of a stochastic differential equation (SDE) driven by L\'{e}vy $\alpha$-stable noise, estimate the SDE's drift field. For $\alpha$ in the interval $[1,2)$, the noise is heavy-tailed, leading to computational difficulties for methods that compute transition densities and/or likelihoods in physical space. We propose a Fourier space approach that centers on computing time-dependent characteristic functions, i.e., Fourier transforms of time-dependent densities. Parameterizing the unknown drift field using Fourier series, we formulate a loss consisting of the squared error between predicted and empirical characteristic functions. We minimize this loss with gradients computed via the adjoint method. For a variety of one- and two-dimensional problems, we demonstrate that this method is capable of learning drift fields in qualitative and/or quantitative agreement with ground truth fields.
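The core quantities of this approach are easy to sketch: the empirical characteristic function of observed samples, and the squared error against a predicted characteristic function. The Gaussian data and frequency grid below are illustrative stand-ins; the actual method predicts the characteristic function from a parameterized drift field and differentiates the loss via the adjoint method.

```python
import cmath
import random

def empirical_cf(samples, freqs):
    """Empirical characteristic function: phi(u) = (1/N) * sum_n exp(i*u*x_n)."""
    n = len(samples)
    return [sum(cmath.exp(1j * u * x) for x in samples) / n for u in freqs]

def cf_loss(pred_cf, emp_cf):
    """Squared error between predicted and empirical characteristic functions."""
    return sum(abs(p - e) ** 2 for p, e in zip(pred_cf, emp_cf))

rng = random.Random(1)
samples = [rng.gauss(0.0, 1.0) for _ in range(5000)]  # stand-in observations
freqs = [0.5 * k for k in range(1, 6)]                # illustrative grid
# for a standard Gaussian the true characteristic function is exp(-u^2 / 2)
true_cf = [cmath.exp(-u * u / 2) for u in freqs]
loss = cf_loss(true_cf, empirical_cf(samples, freqs))
```

Working with characteristic functions sidesteps the heavy tails: they exist and stay bounded for alpha-stable noise even when physical-space densities are awkward to evaluate.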

It has been shown that deep neural networks are prone to overfitting on biased training data. To address this issue, meta-learning employs a meta model to correct the training bias. Despite promising performance, extremely slow training is currently the bottleneck of meta-learning approaches. In this paper, we introduce a novel Faster Meta Update Strategy (FaMUS) that replaces the most expensive step in the meta-gradient computation with a faster layer-wise approximation. We empirically find that FaMUS yields not only a reasonably accurate but also a low-variance approximation of the meta gradient. We conduct extensive experiments to verify the proposed method on two tasks and show that it saves two-thirds of the training time while maintaining comparable or even better generalization performance. In particular, our method achieves state-of-the-art performance on both synthetic and realistic noisy labels, and obtains promising performance on long-tailed recognition on standard benchmarks.

Since deep neural networks were developed, they have made huge contributions to everyday life. Machine learning provides more rational advice than humans are capable of in almost every aspect of daily life. Despite this achievement, however, the design and training of neural networks remain challenging and unpredictable procedures. To lower the technical threshold for common users, automated hyper-parameter optimization (HPO) has become a popular topic in both academia and industry. This paper provides a review of the most essential topics in HPO. The first section introduces the key hyper-parameters related to model training and structure, and discusses their importance and methods for defining their value ranges. The paper then focuses on major optimization algorithms and their applicability, covering their efficiency and accuracy, especially for deep learning networks. It next reviews major services and toolkits for HPO, comparing their support for state-of-the-art search algorithms, compatibility with major deep learning frameworks, and extensibility for user-designed modules. The paper concludes with open problems in applying HPO to deep learning, a comparison between optimization algorithms, and prominent approaches for model evaluation under limited computational resources.
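As a minimal example of the simplest algorithm such a review covers, the sketch below runs random search over a learning rate and a layer width against a stand-in objective with a known optimum. The objective function and search ranges are invented for illustration; a real HPO loop would train and validate a model at each trial.

```python
import math
import random

def objective(lr, width):
    """Stand-in validation loss with a known optimum at lr=0.01, width=64;
    purely illustrative -- replaces an actual train/validate cycle."""
    return (math.log10(lr) + 2) ** 2 + (math.log2(width) - 6) ** 2

def random_search(trials, rng):
    """Random search: sample configurations independently, keep the best."""
    best = (float("inf"), None)
    for _ in range(trials):
        lr = 10 ** rng.uniform(-5, 0)      # learning rate on a log scale
        width = 2 ** rng.randint(2, 10)    # width from powers of two
        best = min(best, (objective(lr, width), (lr, width)))
    return best

rng = random.Random(42)
best_loss, best_cfg = random_search(100, rng)
```

Sampling the learning rate on a log scale is the standard trick the review's value-range discussion refers to: plausible learning rates span orders of magnitude, so uniform sampling in linear space would waste most trials.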
