
We study the problem of optimal power allocation in single-hop multi-antenna ad-hoc wireless networks. A standard technique to solve this problem involves optimizing a tri-convex function under power constraints using a block-coordinate-descent (BCD) based iterative algorithm. This approach, termed WMMSE, tends to be computationally complex and time-consuming. Several learning-based approaches have been proposed to speed up the power allocation process. A recent work, UWMMSE, learns an affine transformation of a WMMSE parameter in an unfolded structure to accelerate convergence. In spite of achieving promising results, its application is limited to single-antenna wireless networks. In this work, we present a UWMMSE framework for power allocation in multiple-input multiple-output (MIMO) interference networks. Through an empirical study, we illustrate the superiority of our approach in comparison to WMMSE and also analyze its robustness to changes in channel conditions and network size.
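For context, the classical WMMSE iteration that UWMMSE unfolds has a simple closed form in the scalar (single-antenna) interference-channel case. The NumPy sketch below is a minimal illustration of that baseline, assuming real nonnegative channel gains, a common per-user power budget Pmax, and noise power sigma2; it is not the learned UWMMSE model or the full MIMO update.

```python
import numpy as np

def wmmse_siso(H, Pmax, sigma2=1.0, iters=100):
    """Classic WMMSE power allocation for a K-user scalar interference
    channel. H[i, j] is the (real, nonnegative) channel gain from
    transmitter j to receiver i; Pmax is the per-user power budget."""
    K = H.shape[0]
    v = np.sqrt(Pmax) * np.ones(K)                # transmit amplitudes, v_k^2 = power
    for _ in range(iters):
        # Receiver update: MMSE receive coefficients u_k
        interf = (H ** 2) @ (v ** 2) + sigma2     # total received power per user
        u = np.diag(H) * v / interf
        # Weight update: w_k = 1 / (1 - u_k h_kk v_k)
        w = 1.0 / (1.0 - u * np.diag(H) * v)
        # Transmitter update: closed-form v_k, projected onto [0, sqrt(Pmax)]
        num = np.diag(H) * u * w
        den = (H.T ** 2) @ (w * u ** 2)
        v = np.clip(num / den, 0.0, np.sqrt(Pmax))
    return v ** 2                                 # allocated powers

# Example: 4-user network with random channel gains
rng = np.random.default_rng(0)
H = np.abs(rng.normal(size=(4, 4)))
print(wmmse_siso(H, Pmax=1.0))
```

UWMMSE keeps this BCD structure but replaces a hand-tuned parameter with a learned affine transformation inside each unfolded iteration, which is what allows it to converge in far fewer layers than the plain loop above.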

Related Content


In this paper, a novel uplink communication scheme for a transmissive reconfigurable metasurface (RMS) multi-antenna system with orthogonal frequency division multiple access (OFDMA) is investigated. Specifically, a transmissive RMS-based receiver equipped with a single receiving antenna is first proposed, and a far-near field channel model based on planar and spherical waves is given. Then, in order to maximize the system sum-rate of uplink communications, we formulate a joint optimization problem over subcarrier allocation, power allocation and RMS transmissive coefficient design. Due to the coupling of the optimization variables, the problem is non-convex and challenging to solve directly. To tackle it, the alternating optimization (AO) algorithm is used to decouple the optimization variables and divide the problem into two sub-problems. First, the joint subcarrier and power allocation problem is solved via the Lagrangian dual decomposition method. Then, the RMS transmissive coefficient design is obtained by applying difference-of-convex (DC) programming, successive convex approximation (SCA) and penalty function methods. Finally, the two sub-problems are iterated alternately until convergence is achieved. Numerical simulation results verify that the proposed algorithm has good convergence performance and improves the system sum-rate compared with other benchmark algorithms.
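As a point of reference for the power-allocation sub-step, the sketch below illustrates the classical Lagrangian-dual (water-filling) solution for spreading a power budget across the subcarriers of a single user; the names `gains` and `P_total` are illustrative, and the paper's joint subcarrier/power/RMS-coefficient design is considerably more involved.

```python
import numpy as np

def waterfilling(gains, P_total, tol=1e-9):
    """Water-filling power allocation over subcarriers: maximize
    sum_n log2(1 + g_n * p_n) s.t. sum_n p_n <= P_total, p_n >= 0.
    Solved by bisection on the Lagrangian water level mu."""
    lo, hi = 0.0, 1.0 / np.min(gains) + P_total   # bracket the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)     # KKT condition: p_n = (mu - 1/g_n)^+
        if p.sum() > P_total:
            hi = mu                               # too much power: lower the level
        else:
            lo = mu
    return np.maximum(lo - 1.0 / gains, 0.0)

gains = np.array([2.0, 1.0, 0.5, 0.1])            # per-subcarrier channel gains
p = waterfilling(gains, P_total=4.0)
print(p, p.sum())
```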

We consider the concatenation of a convolutional code (CC) with an optimized cyclic redundancy check (CRC) code as a promising paradigm for good short blocklength codes. The resulting CRC-aided convolutional code naturally permits the use of serial list Viterbi decoding (SLVD) to achieve maximum-likelihood decoding. The convolutional encoder of interest is of rate-$1/\omega$ and the convolutional code is either zero-terminated (ZT) or tail-biting (TB). The resulting CRC-aided convolutional code is called a CRC-ZTCC or a CRC-TBCC. To design a good CRC-aided convolutional code, we propose the distance-spectrum optimal (DSO) CRC polynomial. A DSO CRC search algorithm for the TBCC is provided. Our analysis reveals that the complexity of SLVD is governed by the expected list rank, which converges to $1$ at high SNR. This allows good performance to be achieved with only a small increase in complexity. In this paper, we focus on transmitting $64$ information bits with a rate-$1/2$ convolutional encoder. For a target error probability $10^{-4}$, simulations show that the best CRC-ZTCC approaches the random-coding union (RCU) bound within $0.4$ dB. Several CRC-TBCCs outperform the RCU bound at moderate SNR values.
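To make the encoding pipeline concrete, the following sketch appends a CRC to a short message and then feeds it through a rate-1/2 zero-terminated convolutional encoder. The CRC and convolutional generator polynomials here are small textbook choices for illustration, not the DSO CRC polynomials designed in the paper.

```python
def crc_bits(msg, poly):
    """Append the CRC remainder of `msg` (list of 0/1) for generator `poly`,
    given as a bit list with a leading 1, e.g. [1, 0, 1, 1] for x^3 + x + 1."""
    m = len(poly) - 1
    reg = msg + [0] * m                  # message shifted by the CRC degree
    for i in range(len(msg)):
        if reg[i]:                       # long division over GF(2)
            for j in range(len(poly)):
                reg[i + j] ^= poly[j]
    return msg + reg[-m:]                # systematic: message followed by CRC

def conv_encode_zt(bits, g=((1, 1, 1), (1, 0, 1))):
    """Rate-1/2 zero-terminated convolutional encoder with generators g
    (octal 7, 5 here); appends zeros to flush the encoder memory."""
    mem = len(g[0]) - 1
    state = [0] * mem
    out = []
    for b in bits + [0] * mem:           # zero termination
        window = [b] + state
        for gen in g:
            out.append(sum(x & y for x, y in zip(window, gen)) % 2)
        state = window[:mem]
    return out

info = [1, 0, 1, 1, 0, 0, 1, 0]          # toy message (the paper uses 64 bits)
codeword = conv_encode_zt(crc_bits(info, [1, 0, 1, 1]))
print(codeword)
```

At the receiver, SLVD would run list Viterbi decoding on this code and accept the first list entry whose CRC checks, which is what ties the decoding complexity to the expected list rank.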

In addition to traditional concerns such as throughput and latency, freshness is becoming an increasingly important performance objective. To stay fresh, applications stream status updates among their components. Existing studies propose the age of information (AoI) metric to gauge freshness and design systems to achieve low AoI. Despite active research in this area, existing results are not applicable to general wired networks for two reasons. First, they focus on wireless settings where AoI is mostly affected by interference and collisions, while queueing is more dominant in wired settings. Second, legacy drop-averse flows are not taken into account in the literature; scheduling mixed flows with distinct performance objectives is not yet addressed. In this paper, we study wired networks shared by two classes of flows, one aiming for high throughput and the other for low AoI, and achieve a good trade-off between the two. Our approach consists of two layers: freshness-aware traffic engineering (FATE) and in-network freshness control (IFC). FATE derives sending rates/update frequencies for flows via optimization, and its solution is then enforced by IFC through efficient scheduling mechanisms at each outport of in-network nodes. We also present an efficient Linux implementation of IFC and demonstrate the effectiveness of FATE/IFC through extensive emulations. Our results show that it is possible to trade a little throughput (5% lower) for much shorter AoI (49% to 71% shorter) compared to state-of-the-art traffic engineering.
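As a minimal illustration of the AoI metric itself, the sketch below numerically averages the age of information over a horizon, given generation and delivery timestamps of status updates; it is not the FATE optimization or the IFC scheduler, and the timestamps are synthetic.

```python
def average_aoi(events, horizon, dt=0.01):
    """Time-average age of information over [0, horizon].
    `events` is a list of (generation_time, delivery_time) pairs for
    status updates, sorted by delivery time. The age at time t is
    t minus the generation time of the freshest delivered update."""
    avg, freshest, idx = 0.0, 0.0, 0
    steps = int(horizon / dt)
    for k in range(steps):
        t = k * dt
        while idx < len(events) and events[idx][1] <= t:
            freshest = max(freshest, events[idx][0])   # newest delivered update
            idx += 1
        avg += (t - freshest) * dt
    return avg / horizon

# Updates generated at 0, 1, 2, ... and delivered after a 0.3 s queueing delay
events = [(g, g + 0.3) for g in range(10)]
print(average_aoi(events, horizon=10.0))
```

The sawtooth behind this computation is what FATE shapes: raising the update frequency lowers the peaks, while queueing delay at in-network nodes raises the floor, which is where IFC's scheduling comes in.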

We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit (ReLU) activation. Recent literature has explored the use of a single neural network to model either uncertain or complex elements within an objective function. However, it is well known that ensembles of neural networks produce more stable predictions and generalize better than single neural networks, which suggests using ensembles of neural networks in a decision-making pipeline. We study how to incorporate a neural network ensemble as the objective function of an optimization model and explore computational approaches for the ensuing problem. We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network. We develop two acceleration techniques for our model: the first is a preprocessing procedure that tightens bounds on critical neurons in the neural network, while the second is a set of valid inequalities based on Benders decomposition. Experimental evaluations of our solution methods are conducted on one global optimization problem and two real-world data sets; the results suggest that our optimization algorithm outperforms the adaptation of a state-of-the-art approach in terms of computational time and optimality gaps.
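For reference, the standard big-$M$ encoding of a single ReLU unit $y = \max(0, w^\top x + b)$, which formulations of this kind build on, can be written as follows, assuming finite bounds $L \le w^\top x + b \le U$ are available:

$$
y \ge w^\top x + b, \qquad y \le w^\top x + b - L(1 - z), \qquad y \le U z, \qquad y \ge 0, \qquad z \in \{0, 1\}.
$$

When the binary variable $z = 1$ the unit is active and $y = w^\top x + b$; when $z = 0$ the output is fixed to zero. Tighter bounds $L$ and $U$ make the continuous relaxation stronger, which is precisely what the bound-tightening preprocessing described above targets, and the ensemble model repeats this encoding once per network.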

Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization. However, their optimization properties are less well understood. We take the first step towards analyzing GNN training by studying the gradient dynamics of GNNs. First, we analyze linearized GNNs and prove that despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that we validate on real-world graphs. Second, we study what may affect the GNNs' training speed. Our results show that the training of GNNs is implicitly accelerated by skip connections, more depth, and/or a good label distribution. Empirical results confirm that our theoretical results for linearized GNNs align with the training behavior of nonlinear GNNs. Our results provide the first theoretical support for the success of GNNs with skip connections in terms of optimization, and suggest that deep GNNs with skip connections would be promising in practice.
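To make the object of analysis concrete, the sketch below runs a forward pass of a linearized GCN, i.e. repeated propagation $H \leftarrow \hat{A} H W_\ell$ with no nonlinearities between layers; the normalization and layer sizes are illustrative, and the paper's analysis applies to GNN training more generally.

```python
import numpy as np

def linearized_gcn(A, X, weights):
    """Forward pass of a linearized GCN: repeated propagation
    H <- A_hat @ H @ W with no nonlinearity between layers, where
    A_hat is the symmetrically normalized adjacency with self-loops."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    A_hat = A_hat / np.sqrt(np.outer(d, d))   # D^{-1/2} (A + I) D^{-1/2}
    H = X
    for W in weights:
        H = A_hat @ H @ W                     # linear: no ReLU between layers
    return H

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.4).astype(float)
A = np.triu(A, 1); A = A + A.T                # symmetric adjacency, no self-loops
X = rng.normal(size=(5, 3))
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 2))]
print(linearized_gcn(A, X, weights).shape)    # (5, 2)
```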

We propose to pre-train a unified language model for both autoencoding and partially autoregressive language modeling tasks using a novel training procedure, referred to as a pseudo-masked language model (PMLM). Given an input text with masked tokens, we rely on conventional masks to learn inter-relations between corrupted tokens and context via autoencoding, and pseudo masks to learn intra-relations between masked spans via partially autoregressive modeling. With well-designed position embeddings and self-attention masks, the context encodings are reused to avoid redundant computation. Moreover, conventional masks used for autoencoding provide global masking information, so that all the position embeddings are accessible in partially autoregressive language modeling. In addition, the two tasks pre-train a unified language model as a bidirectional encoder and a sequence-to-sequence decoder, respectively. Our experiments show that the unified language models pre-trained using PMLM achieve new state-of-the-art results on a wide range of natural language understanding and generation tasks across several widely used benchmarks.

In this paper, we propose Latent Relation Language Models (LRLMs), a class of language models that parameterizes the joint distribution over the words in a document and the entities that occur therein via knowledge graph relations. This model has a number of attractive properties: it not only improves language modeling performance, but is also able to annotate the posterior probability of entity spans for a given text through relations. Experiments demonstrate empirical improvements over both a word-based baseline language model and a previous approach that incorporates knowledge graph information. Qualitative analysis further demonstrates the proposed model's ability to learn to predict appropriate relations in context.

Asynchronous momentum stochastic gradient descent (Async-MSGD) is one of the most popular algorithms in distributed machine learning. However, its convergence properties for complicated nonconvex problems are still largely unknown because of current technical limitations. Therefore, in this paper, we propose to analyze the algorithm through a simpler but nontrivial nonconvex problem, streaming PCA, which helps us understand Async-MSGD better, even for more general problems. Specifically, we establish the asymptotic rate of convergence of Async-MSGD for streaming PCA by diffusion approximation. Our results indicate a fundamental tradeoff between asynchrony and momentum: to ensure convergence and acceleration through asynchrony, we have to reduce the momentum (compared with Sync-MSGD). To the best of our knowledge, this is the first theoretical attempt at understanding Async-MSGD for distributed nonconvex stochastic optimization. Numerical experiments on both streaming PCA and training deep neural networks are provided to support our findings for Async-MSGD.
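For intuition, the sketch below runs momentum SGD on streaming PCA with Oja-style updates in a plain synchronous, single-worker setting; the staleness introduced by asynchrony, which is the focus of the paper's analysis, is omitted here, and the step size and momentum values are illustrative.

```python
import numpy as np

def streaming_pca_msgd(samples, eta=0.05, beta=0.5):
    """Estimate the top principal component from a stream of samples using
    Oja-style stochastic updates with heavy-ball momentum.
    Each sample x contributes the stochastic gradient (x x^T) w."""
    d = samples.shape[1]
    rng = np.random.default_rng(0)
    w = rng.normal(size=d); w /= np.linalg.norm(w)
    m = np.zeros(d)
    for x in samples:
        grad = x * (x @ w)                 # (x x^T) w without forming the matrix
        m = beta * m + grad                # momentum accumulation
        w = w + eta * m
        w /= np.linalg.norm(w)             # project back onto the unit sphere
    return w

# Synthetic stream whose leading eigenvector is the first coordinate axis
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 10)) * np.array([3.0] + [1.0] * 9)
w = streaming_pca_msgd(X)
print(np.abs(w[0]))                        # close to 1 if the top component was found
```

In the asynchronous setting the gradient would be computed from a stale copy of w, and the tradeoff described above says that tolerating more staleness requires shrinking beta.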

Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve it. Despite this, there still remain important metrics for which we do not have fair classifiers, and many of the aforementioned algorithms do not come with theoretical guarantees, perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and which comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to problems in this family. We present empirical results showing that our algorithm can achieve near-perfect fairness with respect to various fairness metrics and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and can handle fairness metrics that previous methods could not.

Social network analysis provides meaningful information about the behavior of network members that can be used in diverse applications such as classification and link prediction. However, network analysis is computationally expensive because of the feature learning required for different applications. In recent years, much research has focused on feature learning methods for social networks. Network embedding maps the network into a lower-dimensional representation space that preserves its properties, yielding a compressed representation of the network. In this paper, we introduce a novel network embedding algorithm named "CARE" that can be used for different types of networks, including weighted, directed and complex ones. Current methods try to preserve the local neighborhood information of nodes, whereas the proposed method utilizes both local neighborhood and community information of network nodes to cover the local and global structure of social networks. CARE builds customized paths, composed of the local and global structure of network nodes, as a basis for network embedding and uses the Skip-gram model to learn representation vectors of nodes. Subsequently, stochastic gradient descent is applied to optimize our objective function and learn the final representation of nodes. Our method scales to new nodes appended to the network without information loss. Parallel generation of the customized random walks is also used to speed up CARE. We evaluate the performance of CARE on multi-label classification and link prediction tasks. Experimental results on various networks indicate that the proposed method outperforms others in both Micro- and Macro-F1 measures for different sizes of training data.
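CARE follows the general random-walk-plus-Skip-gram recipe, with customized community-aware paths in place of plain walks. The sketch below shows that general recipe with uniform random walks on a toy graph, assuming networkx and gensim are available; it is not the CARE path-construction procedure.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, num_walks=10, walk_len=20, seed=0):
    """Generate plain uniform random walks over G. CARE would instead build
    customized paths mixing local neighborhoods with community information."""
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        for node in G.nodes():
            walk = [node]
            while len(walk) < walk_len:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])   # Word2Vec expects token strings
    return walks

G = nx.karate_club_graph()
walks = random_walks(G)
# Skip-gram (sg=1) over the walks yields one embedding vector per node
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=1, epochs=5)
print(model.wv[str(0)].shape)                      # (64,)
```

The learned vectors would then feed the multi-label classification and link prediction evaluations; in CARE the walk generation step is parallelized, and new nodes can be embedded by walking from them without retraining from scratch.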
