
We consider a base station (BS) that receives version update packets from multiple exogenous streams and broadcasts them to the corresponding users over a fading broadcast channel using a non-orthogonal multiple access (NOMA) scheme. Sequentially indexed packets arrive randomly in each stream, with new packets rendering previous ones obsolete. In this setting, we consider the version age of information (VAoI) at a user, defined as the difference between the version index of the latest packet available at the BS and that at the user, as a metric of freshness of information. Our objective is to minimize a weighted sum of the average VAoI across users subject to an average power constraint at the BS, by optimally scheduling the update packets from the various streams for transmission and transmitting them with sufficient power to guarantee successful delivery. We consider the class of channel-only stationary randomized policies (CO-SRP), which rely solely on channel power gains for transmission decisions. We solve the resulting non-convex problem optimally and show that the VAoI achieved under the optimal CO-SRP is within twice the optimal achievable VAoI. We also obtain a Constrained Markov Decision Process (CMDP)-based solution and characterize its structural properties. Numerical simulations show that the optimal CO-SRP and the CMDP-based solution perform closely. Additionally, a time division multiple access (TDMA) scheme, which allows transmission to at most one user at a time, matches NOMA's performance under tight average power constraints, whereas NOMA outperforms TDMA as the constraint is relaxed.
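To make the VAoI metric concrete, the following is a minimal single-user sketch, assuming Bernoulli version arrivals at the BS, a fixed transmission probability standing in for a channel-only stationary randomized policy, and error-free delivery; `arrival_prob` and `tx_prob` are illustrative placeholders, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative single-user model: new versions arrive at the BS with
# probability arrival_prob per slot; a randomized policy transmits with
# probability tx_prob and delivers the freshest packet when it does.
arrival_prob, tx_prob, T = 0.3, 0.5, 100_000

bs_version, user_version, vaoi_sum = 0, 0, 0
for _ in range(T):
    if rng.random() < arrival_prob:        # a new version arrives at the BS
        bs_version += 1
    if rng.random() < tx_prob:             # policy decides to transmit this slot
        user_version = bs_version          # error-free delivery assumed
    vaoi_sum += bs_version - user_version  # version age of information (VAoI)

print("time-average VAoI:", vaoi_sum / T)
```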

Related Content

We present a novel approach that addresses both safety and stability of a haptic teleoperation system within the framework of Haptic Shared Autonomy (HSA). We use Control Barrier Functions (CBFs) to generate a control input that follows the user's input as closely as possible while guaranteeing safety. For stability of the human-in-the-loop system, we limit the force feedback perceived by the user via a small $L_2$-gain, which is achieved by limiting the control and the force feedback through a differential constraint. Specifically, exploiting the properties of HSA, we propose two pathways to design the control and the force feedback: Sequential Control Force (SCF) and Joint Control Force (JCF). Both designs achieve safety and stability but respond differently to the user's commands. We conducted simulation experiments to evaluate and investigate the properties of the designed methods, and we also tested the proposed method on a physical quadrotor UAV and a haptic interface.
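As an illustration of CBF-based safety filtering of a user command (a sketch of the general idea, not the paper's controller), consider a 1D single-integrator with a distance-keeping constraint; the dynamics, the barrier function, and the parameters `x_min` and `alpha` are assumptions chosen so the QP has a closed-form solution.

```python
def cbf_safety_filter(u_des: float, x: float, x_min: float = 0.5, alpha: float = 2.0) -> float:
    """Minimal 1D CBF-QP sketch for the hypothetical model x_dot = u.

    Safety set: h(x) = x - x_min >= 0.  The CBF condition h_dot + alpha*h >= 0
    reduces to u >= -alpha*(x - x_min), so the "closest safe input" QP has the
    closed-form solution below.
    """
    return max(u_des, -alpha * (x - x_min))

# Example: the user commands motion toward the safety boundary; the filter clips it.
x = 0.6                          # current distance to the obstacle
u_user = -1.0                    # user's (unsafe) commanded velocity
u_safe = cbf_safety_filter(u_user, x)
print(u_safe)                    # -0.2: follows the user as closely as safety allows
```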

Complete complementary codes (CCCs) play a vital role not only in wireless communication, particularly in multicarrier systems where achieving an interference-free environment is of paramount importance, but also in the construction of other codes that require appropriate functions to meet the diverse demands of today's evolving wireless communication landscape. This research focuses on constructing $q$-ary functions for both traditional and spectrally null constraint (SNC) CCCs\footnote{When no code in the CCCs has zero components, we call them traditional CCCs; otherwise, we call them SNC-CCCs in this paper.} of flexible length, set size, and alphabet. We construct traditional CCCs with length $L = \prod_{i=1}^k p_i^{m_i}$, set size $K = \prod_{i=1}^k p_i^{n_i+1}$, and alphabet size $q=\prod_{i=1}^k p_i$, such that $p_1<p_2<\cdots<p_k$. The parameters $m_1, m_2, \ldots, m_k$ (each at least $2$) are positive integers, the parameters $n_1, n_2, \ldots, n_k$ are non-negative integers satisfying $n_i \leq m_i-1$, and $k$ is a positive integer. To achieve these parameters, we define $q$-ary functions over the domain $\mathbb{Z}_{p_1}^{m_1}\times \cdots \times \mathbb{Z}_{p_k}^{m_k}$, a proper subset of $\mathbb{Z}_{q}^m$ containing $\prod_{i=1}^k p_i^{m_i}$ vectors, where $\mathbb{Z}_{p_i}^{m_i}=\{0,1,\ldots,p_i-1\}^{m_i}$ and $m = m_1 + m_2 + \cdots + m_k$. This choice of domain allows us to cover sequences of all conceivable integer-valued lengths over the alphabet $\mathbb{Z}_q$. We further demonstrate that by constraining a $q$-ary function that generates traditional CCCs, we can derive SNC-CCCs with the same length and alphabet but a set size no larger than that of the traditional CCCs.
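As a quick check of how the parameters interact, the following sketch computes $L$, $K$, and $q$ for a hypothetical choice of primes and exponents; the specific values are illustrative, not taken from the paper.

```python
from math import prod

# Hypothetical parameter choice: p = (2, 3), m = (2, 3), n = (1, 0),
# respecting m_i >= 2 and 0 <= n_i <= m_i - 1.
p = (2, 3)
m = (2, 3)
n = (1, 0)

L = prod(pi**mi for pi, mi in zip(p, m))          # length   L = prod p_i^{m_i}
K = prod(pi**(ni + 1) for pi, ni in zip(p, n))    # set size K = prod p_i^{n_i+1}
q = prod(p)                                       # alphabet size q = prod p_i

print(L, K, q)   # 108, 12, 6
```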

Determining the optimal fidelity for the transmission of quantum information over noisy quantum channels is one of the central problems in quantum information theory. Recently, [Berta-Borderi-Fawzi-Scholz, Mathematical Programming, 2021] introduced an asymptotically converging semidefinite programming hierarchy of outer bounds for this quantity. However, the size of the semidefinite programs (SDPs) grows exponentially with respect to the level of the hierarchy, thus making their computation unscalable. In this work, by exploiting the symmetries in the SDP, we show that, for a fixed output dimension of the quantum channel, we can compute the SDP in time polynomial with respect to the level of the hierarchy and input dimension. As a direct consequence of our result, the optimal fidelity can be approximated with an accuracy of $\epsilon$ in $\mathrm{poly}(1/\epsilon, \text{input dimension})$ time.

Network slicing, a cornerstone technology for future networks, enables the creation of customized virtual networks on a shared physical infrastructure. This fosters innovation and agility by providing dedicated resources tailored to specific applications. However, current orchestration and management approaches face limitations in handling the complexity of new service demands within multi-administrative domain environments. This paper proposes a future vision for network slicing powered by Large Language Models (LLMs) and multi-agent systems, offering a framework that can be integrated with existing Management and Orchestration (MANO) frameworks. This framework leverages LLMs to translate user intent into technical requirements, map network functions to infrastructure, and manage the entire slice lifecycle, while multi-agent systems facilitate collaboration across different administrative domains. We also discuss the challenges associated with implementing this framework and potential solutions to mitigate them.

Graph neural networks (GNNs) have gained significant attention due to their expansive real-world applications. To build trustworthy GNNs, two aspects - fairness and privacy - have emerged as critical considerations. Previous studies have separately examined the fairness and privacy aspects of GNNs, revealing their trade-off with GNN performance. Yet, the interplay between these two aspects remains unexplored. In this paper, we pioneer the exploration of the interaction between the privacy risk of edge leakage and the individual fairness of a GNN. Our theoretical analysis reveals that edge privacy risks unfortunately escalate as the nodes' individual fairness improves. This tension hinders achieving privacy and fairness in GNNs at the same time. To balance fairness and privacy, we introduce a fairness-aware loss reweighting module based on influence functions and a privacy-aware graph structure perturbation module within a fine-tuning mechanism. Experimental results underscore the effectiveness of our approach in achieving GNN fairness with limited performance compromise and controlled privacy risks. This work contributes to the comprehensive development of trustworthy GNNs by simultaneously addressing both fairness and privacy.
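To make the loss-reweighting idea concrete, here is a generic per-node reweighting sketch; the paper derives the weights from influence functions to balance individual fairness against edge-privacy risk, whereas the weights below are arbitrary placeholders.

```python
import numpy as np

def reweighted_loss(per_node_losses: np.ndarray, weights: np.ndarray) -> float:
    """Generic per-node loss reweighting sketch: the weights here are
    placeholders, not influence-function-derived values."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                         # normalize to keep the loss scale
    return float(np.sum(w * per_node_losses))

losses = np.array([0.9, 0.2, 0.4, 1.3])     # toy per-node losses
weights = np.array([2.0, 1.0, 1.0, 0.5])    # hypothetical fairness/privacy-aware weights
print(reweighted_loss(losses, weights))
```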

Real-world data tend to be heavily imbalanced and severely skew data-driven deep neural networks, which makes Long-Tailed Recognition (LTR) a massively challenging task. Existing LTR methods seldom train Vision Transformers (ViTs) with Long-Tailed (LT) data, while off-the-shelf pretrained ViT weights always lead to unfair comparisons. In this paper, we systematically investigate the performance of ViTs in LTR and propose LiVT to train ViTs from scratch only with LT data. Observing that ViTs suffer from more severe LTR problems, we conduct Masked Generative Pretraining (MGP) to learn generalized features. With ample and solid evidence, we show that MGP is more robust than supervised pretraining. In addition, Binary Cross Entropy (BCE) loss, which performs conspicuously well with ViTs, struggles in LTR. We further propose a balanced BCE (Bal-BCE) to ameliorate it with strong theoretical grounding. Specifically, we derive an unbiased extension of the sigmoid and compensate for the extra logit margins when deploying it. Our Bal-BCE contributes to the quick convergence of ViTs within just a few epochs. Extensive experiments demonstrate that, with MGP and Bal-BCE, LiVT trains ViTs well without any additional data and significantly outperforms comparable state-of-the-art methods; e.g., our ViT-B achieves 81.0% Top-1 accuracy on iNaturalist 2018 without bells and whistles. Code is available at //github.com/XuZhengzhuo/LiVT.
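As an illustration of prior-compensated logit margins for sigmoid BCE, in the spirit of Bal-BCE but not the paper's exact formulation, consider the following sketch; the class counts, logits, and targets are toy values.

```python
import numpy as np

def balanced_bce_logits(logits, targets, class_counts):
    """Sketch of a class-prior logit adjustment for sigmoid BCE: each class
    logit is shifted by the log prior, so head classes need larger raw logits
    to be scored positive (illustrative, not the paper's exact Bal-BCE)."""
    prior = class_counts / class_counts.sum()
    adjusted = logits + np.log(prior)                 # compensate logit margins
    p = 1.0 / (1.0 + np.exp(-adjusted))               # sigmoid
    eps = 1e-12
    loss = -(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))
    return loss.mean()

# Toy example: 3 classes with a long-tailed count distribution.
counts = np.array([1000.0, 100.0, 10.0])
logits = np.array([[2.0, 0.5, -1.0]])
targets = np.array([[0.0, 0.0, 1.0]])                 # tail-class positive
print(balanced_bce_logits(logits, targets, counts))
```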

Residual networks (ResNets) have displayed impressive results in pattern recognition and, recently, have garnered considerable theoretical interest due to a perceived link with neural ordinary differential equations (neural ODEs). This link relies on the convergence of network weights to a smooth function as the number of layers increases. We investigate the properties of weights trained by stochastic gradient descent and their scaling with network depth through detailed numerical experiments. We observe the existence of scaling regimes markedly different from those assumed in the neural ODE literature. Depending on certain features of the network architecture, such as the smoothness of the activation function, one may obtain an alternative ODE limit, a stochastic differential equation, or neither of these. These findings cast doubt on the validity of the neural ODE model as an adequate asymptotic description of deep ResNets and point to an alternative class of differential equations as a better description of the deep network limit.
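As a rough illustration of the kind of diagnostic involved, the sketch below compares the average layer-to-layer weight increment for weights that converge to a smooth function of depth (the ODE-like regime) versus i.i.d. weights; the toy weight sequences are illustrative stand-ins, not trained ResNet weights.

```python
import numpy as np

def increment_scaling(layer_weights):
    """Average layer-to-layer increment ||W_{k+1} - W_k||; how this shrinks
    (or not) as depth L grows separates a smooth, ODE-like limit from
    rougher regimes.  Inputs here are toy sequences, not trained weights."""
    diffs = [np.linalg.norm(b - a) for a, b in zip(layer_weights[:-1], layer_weights[1:])]
    return float(np.mean(diffs))

rng = np.random.default_rng(0)
for L in (32, 128):
    t = np.linspace(0.0, 1.0, L)
    smooth = [np.sin(2 * np.pi * ti) * np.ones((4, 4)) for ti in t]  # increments shrink ~ 1/L
    iid = [rng.normal(size=(4, 4)) for _ in range(L)]                # increments stay O(1)
    print(L, increment_scaling(smooth), increment_scaling(iid))
```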

Deep convolutional neural networks (CNNs) have recently achieved great success in many visual recognition tasks. However, existing deep neural network models are computationally expensive and memory intensive, hindering their deployment on devices with low memory resources or in applications with strict latency requirements. A natural approach is therefore to perform model compression and acceleration without significantly decreasing model performance. Tremendous progress has been made in this area over the past few years. In this paper, we survey recent advanced techniques for compacting and accelerating CNN models. These techniques are roughly categorized into four schemes: parameter pruning and sharing, low-rank factorization, transferred/compact convolutional filters, and knowledge distillation. Methods for parameter pruning and sharing are described first, followed by the other techniques. For each scheme, we provide insightful analysis of performance, related applications, advantages, and drawbacks. We then go through a few very recent successful methods, for example, dynamic capacity networks and stochastic depth networks. After that, we survey the evaluation metrics and the main datasets used for evaluating model performance, as well as recent benchmarking efforts. Finally, we conclude the paper and discuss remaining challenges and possible future directions on this topic.
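To ground the first scheme, here is a minimal sketch of unstructured magnitude-based pruning, one representative technique from the pruning-and-sharing family; the sparsity level and weight matrix are illustrative.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Toy parameter pruning: zero out the smallest-magnitude weights until
    roughly the requested fraction has been removed (unstructured pruning)."""
    k = int(sparsity * weights.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    mask = np.abs(weights) > threshold      # keep only weights above the cutoff
    return weights * mask

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)).astype(np.float32)
W_pruned = magnitude_prune(W, sparsity=0.5)
print("nonzero fraction:", np.count_nonzero(W_pruned) / W.size)
```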

We propose a novel single-shot object detection network named Detection with Enriched Semantics (DES). Our motivation is to enrich the semantics of object detection features within a typical deep detector by means of a semantic segmentation branch and a global activation module. The segmentation branch is supervised by weak segmentation ground truth, i.e., no extra annotation is required. In conjunction with that, we employ a global activation module that learns the relationship between channels and object classes in a self-supervised manner. Comprehensive experimental results on both the PASCAL VOC and MS COCO detection datasets demonstrate the effectiveness of the proposed method. In particular, with a VGG16-based DES, we achieve an mAP of 81.7 on the VOC2007 test set and an mAP of 32.8 on COCO test-dev with an inference time of 31.5 milliseconds per image on a Titan Xp GPU. With a lower-resolution version, we achieve an mAP of 79.7 on VOC2007 with an inference time of 13.0 milliseconds per image.
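For intuition about the kind of channel reweighting a global activation module performs, here is a squeeze-and-excitation-style sketch; the paper's module may differ in design, and the weight matrices below are random placeholders.

```python
import numpy as np

def global_activation(feature_map, w1, w2):
    """Channel-reweighting sketch (squeeze-and-excitation style, assumed here;
    not necessarily the paper's exact design).
    feature_map: (C, H, W); w1: (C_mid, C); w2: (C, C_mid)."""
    z = feature_map.mean(axis=(1, 2))              # global average pool per channel
    h = np.maximum(w1 @ z, 0.0)                    # ReLU bottleneck
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))            # per-channel gates in (0, 1)
    return feature_map * s[:, None, None]          # reweight channels

rng = np.random.default_rng(0)
fmap = rng.random((8, 16, 16)).astype(np.float32)
w1, w2 = rng.random((4, 8)), rng.random((8, 4))
print(global_activation(fmap, w1, w2).shape)       # (8, 16, 16)
```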

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but producing them with high perceptual quality and high efficiency requires further research. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that can learn and approximate the distribution of original instances. With AdvGAN, once the generator is trained, it can generate adversarial perturbations efficiently for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models achieve high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first with 92.76% accuracy on a public MNIST black-box attack challenge.
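To illustrate how a trained perturbation generator would be used at inference time (a sketch of the general idea, not the paper's architecture), consider the following, where `toy_generator` and the $L_\infty$ budget `eps` are placeholders.

```python
import numpy as np

def advgan_style_perturb(x, generator, eps=0.03):
    """Sketch of a GAN-based attack at inference time: the trained generator
    maps an input to a perturbation, which is clipped to an L_inf budget and
    added to produce the adversarial example (generator is a placeholder)."""
    delta = generator(x)
    delta = np.clip(delta, -eps, eps)            # bound perturbation magnitude
    return np.clip(x + delta, 0.0, 1.0)          # keep a valid image range

# Toy stand-in generator: random noise for illustration only, not a trained GAN.
rng = np.random.default_rng(0)
toy_generator = lambda x: 0.1 * rng.standard_normal(x.shape)
x = rng.random((1, 28, 28))                      # e.g., an MNIST-sized input
x_adv = advgan_style_perturb(x, toy_generator)
print(np.abs(x_adv - x).max())                   # stays within eps (up to pixel clipping)
```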
