
In this paper, we aim to maximize the weighted sum-rate (WSR) of rate-splitting multiple access (RSMA) in multi-user multi-antenna transmission networks through the joint optimization of rate allocation and beamforming. Unlike conventional methods such as weighted minimum mean square error (WMMSE) and standard fractional programming (FP), which tackle the non-convex WSR problem iteratively via disciplined convex subproblems solved by optimization toolboxes, our work pioneers a toolbox-free approach. For the first time, we identify the optimal beamforming structure and common rate allocation for WSR maximization in RSMA by leveraging FP and Lagrangian duality. We then propose an algorithm based on FP and fixed-point iteration that optimizes the beamforming and common rate allocation without any optimization toolbox. Our numerical results demonstrate that the proposed algorithm attains the same performance as standard FP and classical WMMSE methods while significantly reducing computation time.
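To make the objective concrete, below is a minimal numpy sketch of WSR evaluation for one-layer RSMA with fixed precoders, assuming the usual model in which every user decodes the common stream first while treating all private streams as noise. The function name `rsma_wsr`, the array shapes, and the closed-form common-rate split (all common rate to the largest-weight user, which is optimal for the unconstrained linear allocation subproblem) are our own illustration, not the authors' algorithm.

```python
import numpy as np

def rsma_wsr(H, P, p_c, weights, sigma2=1.0):
    """WSR of 1-layer RSMA for fixed precoders.

    H: (K, Nt) complex user channels; P: (Nt, K) private precoders
    (column k serves user k); p_c: (Nt,) common precoder.
    """
    K = H.shape[0]
    rate_c, rate_p = np.empty(K), np.empty(K)
    for k in range(K):
        h = H[k]
        priv = np.abs(h.conj() @ P) ** 2        # |h_k^H p_j|^2 for all j
        # common stream decoded first, all private streams treated as noise
        rate_c[k] = np.log2(1 + np.abs(h.conj() @ p_c) ** 2
                            / (priv.sum() + sigma2))
        # private stream decoded after removing the common stream
        rate_p[k] = np.log2(1 + priv[k] / (priv.sum() - priv[k] + sigma2))
    # the common rate must be decodable by every user, hence the min;
    # assigning all of it to the largest-weight user maximizes the WSR
    return weights @ rate_p + weights.max() * rate_c.min()
```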

Related Content

In this paper, we employ Singular Value Canonical Correlation Analysis (SVCCA) to analyze representations learnt in a multilingual end-to-end speech translation model trained on 22 languages. SVCCA enables us to estimate representational similarity across languages and layers, enhancing our understanding of the functionality of multilingual speech translation and its potential connection to multilingual neural machine translation. The multilingual speech translation model is trained on the CoVoST 2 dataset in all possible directions, and we utilize LASER to extract parallel bitext data for SVCCA analysis. We derive three major findings from our analysis: (I) Linguistic similarity loses its efficacy in multilingual speech translation when the training data for a specific language is limited. (II) Enhanced encoder representations and well-aligned audio-text data significantly improve translation quality, surpassing the bilingual counterparts when the training data is not compromised. (III) The encoder representations of multilingual speech translation demonstrate superior performance in predicting phonetic features in linguistic typology prediction. With these findings, we propose that relaxing the limited-data constraint for low-resource languages and subsequently combining them with linguistically related high-resource languages could offer a more effective approach for multilingual end-to-end speech translation.
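For reference, a standard SVCCA computation (following the commonly used recipe, not necessarily this paper's exact pipeline) can be sketched as follows: reduce each representation matrix with an SVD that retains a fixed fraction of variance, then report the mean canonical correlation between the reduced subspaces.

```python
import numpy as np

def svcca(X, Y, keep=0.99):
    """Mean SVCCA correlation between two (n_samples, dim) representations."""
    def svd_reduce(A):
        A = A - A.mean(axis=0)
        U, s, _ = np.linalg.svd(A, full_matrices=False)
        k = int(np.searchsorted(np.cumsum(s ** 2) / np.sum(s ** 2), keep)) + 1
        return U[:, :k] * s[:k]                  # directions carrying `keep` of the variance
    Qx, _ = np.linalg.qr(svd_reduce(X))          # orthonormal bases of the
    Qy, _ = np.linalg.qr(svd_reduce(Y))          # two reduced subspaces
    rho = np.linalg.svd(Qx.T @ Qy, compute_uv=False)  # canonical correlations
    return float(rho.mean())
```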

In this paper, we study how to fairly allocate a set of $m$ indivisible chores to a group of $n$ agents, each of which has a general additive cost function over the items. Since envy-free (EF) allocations are not guaranteed to exist, we consider the notion of envy-freeness up to any item (EFX). In contrast to the fruitful results on (approximately) EFX allocations for goods, very little is known for the allocation of chores. Prior to our work, it was known that EFX allocations of chores always exist for two agents, or for any number of agents with identical-ordering cost functions. For general instances, no non-trivial approximation result regarding EFX allocations was known. In this paper, we make progress in this direction by providing several polynomial-time algorithms for the computation of EFX and approximately EFX allocations. We show that for three agents we can always compute a $4.45$-approximation of an EFX allocation. For $n \geq 4$ agents, our algorithm always computes a $(3n^2-n)$-approximation. We also study bi-valued instances, in which agents have at most two cost values on the chores. For three agents, we provide an algorithm for the computation of EFX allocations. For $n \geq 4$ agents, we present algorithms that compute partial EFX allocations with at most $n-1$ unallocated items, and $(n-1)$-approximations of EFX allocations.
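To pin down the solution concept, here is a toy checker for $\alpha$-EFX under additive chore costs; the data layout is our own and the paper's algorithms for *finding* such allocations are far more involved.

```python
def is_alpha_efx_for_chores(cost, alloc, alpha=1.0):
    """Check alpha-EFX for chores under additive costs.

    cost[i][e]: agent i's cost for chore e; alloc: list of bundles (sets).
    EFX for chores: removing ANY single chore from agent i's own bundle
    must leave i with cost at most alpha * (i's cost for j's bundle);
    the binding case is removing i's cheapest chore.
    """
    n = len(alloc)
    c = lambda i, S: sum(cost[i][e] for e in S)
    for i in range(n):
        if not alloc[i]:
            continue
        worst = c(i, alloc[i]) - min(cost[i][e] for e in alloc[i])
        for j in range(n):
            if j != i and worst > alpha * c(i, alloc[j]) + 1e-12:
                return False
    return True
```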

This paper studies multiaccess coded caching (MACC), where the connectivity topology between the users and the caches is described by a class of combinatorial designs. Our model includes as special cases several MACC topologies considered in previous works. The considered MACC network includes a server containing $N$ files, $\Gamma$ cache nodes and $K$ cacheless users, where each user can access $L$ cache nodes. The server is connected to the users via an error-free shared link, while each user can directly retrieve the content stored at its connected cache nodes. Our goal is to minimize the worst-case transmission load on the shared link in the delivery phase. The main limitation of existing MACC works is that only some specific access topologies are considered, so the number of users $K$ must be either linear or exponential in $\Gamma$. We overcome this limitation by formulating a new access topology derived from two classical combinatorial structures, the $t$-design and the $t$-group divisible design, in which $K$ can scale linearly, polynomially, or even exponentially with $\Gamma$. By leveraging the properties of these combinatorial structures, we propose two classes of coded caching schemes covering this flexible range of user numbers. In addition, our schemes unify most schemes for the shared-link network and many schemes for the multiaccess network, except for the cyclic wrap-around topology.
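As a toy illustration of how a combinatorial access topology lets $K$ grow polynomially in $\Gamma$, the sketch below enumerates one user per $L$-subset of cache nodes, giving $K = \binom{\Gamma}{L}$; this is one classical MACC topology used here only for intuition, not the paper's $t$-design or group-divisible-design construction.

```python
from itertools import combinations

# One user per L-subset of cache nodes: K = C(Gamma, L) grows
# polynomially in Gamma for fixed L.
def access_topology(gamma, L):
    return list(combinations(range(gamma), L))

users = access_topology(5, 2)       # Gamma = 5, L = 2  ->  K = 10 users
print(len(users), users[:3])        # 10 [(0, 1), (0, 2), (0, 3)]
```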

In this paper, we propose a certificate sharing system based on blockchain that gives students authority and control over their academic certificates. Our strategy involves developing blockchain-based NFT certificates that can be shared with institutions or employers via blockchain addresses. Students may access the data created by each individual institute on a single platform, filter the view to the relevant courses according to their requirements, and mint their certificate metadata as NFTs. This method provides accountability of access, comprehensive records that are permanently maintained in IPFS, and verifiable provenance for creating, distributing, and accessing certificates. It also makes it possible to share certificates more safely and efficiently. By incorporating trust factors through data provenance, our system provides a countermeasure against issues such as fake and duplicate certificates. It addresses the challenge of traditional certificate verification, which is a lengthy, manual process. With this system, students can manage and validate their academic credentials from multiple institutions in one location while ensuring authenticity and confidentiality, using digital signatures and hashing to protect data against unauthorized access. Overall, our suggested system ensures data safety, accountability, and confidentiality while offering a novel approach to certificate distribution.
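The verification step rests on content hashing: a hypothetical sketch is shown below, where `certificate_digest` and the field names are our own illustration, not the system's contracts. A real deployment would pin the metadata JSON to IPFS and mint an NFT whose token metadata references the content hash.

```python
import hashlib
import json

# Hypothetical metadata record; field names are illustrative only.
def certificate_digest(student, course, institute, grade):
    record = json.dumps({"student": student, "course": course,
                         "institute": institute, "grade": grade},
                        sort_keys=True).encode()
    return hashlib.sha256(record).hexdigest()

digest = certificate_digest("alice", "CS101", "ExampleU", "A")
# a verifier recomputes the digest from the presented metadata and
# compares it with the value anchored on-chain
print(digest)
```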

In this paper, we prove the first Bayesian regret bounds for Thompson Sampling in reinforcement learning in a multitude of settings. We simplify the learning problem using a discrete set of surrogate environments, and present a refined analysis of the information ratio using posterior consistency. This leads to an upper bound of order $\widetilde{O}(H\sqrt{d_{l_1}T})$ in the time-inhomogeneous reinforcement learning problem, where $H$ is the episode length and $d_{l_1}$ is the Kolmogorov $l_1$-dimension of the space of environments. We then find concrete bounds on $d_{l_1}$ in a variety of settings, such as tabular, linear and finite mixtures, and discuss how our results are either the first of their kind or improve the state-of-the-art.
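For readers unfamiliar with Thompson Sampling in RL, here is a toy tabular sketch of the posterior-sampling pattern the analysis concerns: sample an environment from the posterior, then act greedily in it for one episode. The Dirichlet/Gaussian posteriors and all names below are our own simplification, not the paper's construction.

```python
import numpy as np

def sample_mdp(trans_counts, rew_sum, visits, rng):
    """Draw one MDP from toy posteriors: Dirichlet transitions,
    Gaussian mean rewards shrinking with visit counts."""
    S, A, _ = trans_counts.shape
    P = np.array([[rng.dirichlet(1 + trans_counts[s, a]) for a in range(A)]
                  for s in range(S)])                       # (S, A, S)
    R = (rew_sum / np.maximum(visits, 1)
         + rng.normal(0, 1 / np.sqrt(1 + visits)))          # (S, A)
    return P, R

def plan(P, R, H):
    """Finite-horizon value iteration; greedy policy for each step h."""
    S, A = R.shape
    V, pi = np.zeros(S), np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V                    # (S, A): reward plus expected value-to-go
        pi[h], V = Q.argmax(axis=1), Q.max(axis=1)
    return pi
```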

Topology can efficiently extract the structural information in a dataset. In this paper, we incorporate topological information into a multiple-output Gaussian process model for transfer learning. To achieve this goal, we extend the framework of circular coordinates into a novel framework of mixed-valued coordinates that takes linear trends in the time series into consideration. A major challenge in learning effectively from multiple time series via a multiple-output Gaussian process model is constructing a functional kernel. We propose to use topologically induced clustering to construct a cluster-based kernel in a multiple-output Gaussian process model. This kernel not only incorporates the topological structural information, but also allows us to put forward a unified framework using topological information in time and motion series.
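A minimal sketch of the cluster-based kernel idea, assuming cluster labels have already been produced by the topological clustering: an RBF over time is scaled by an output-similarity factor that depends only on whether two series share a cluster. The `within`/`across` values are hypothetical, and the elementwise product of the two positive semi-definite factors keeps the Gram matrix valid.

```python
import numpy as np

def cluster_kernel(t, clusters, lengthscale=1.0, within=1.0, across=0.2):
    """Gram matrix for a toy multi-output GP kernel: an RBF over time,
    scaled by output similarity that depends only on cluster labels."""
    k_time = np.exp(-0.5 * ((t[:, None] - t[None, :]) / lengthscale) ** 2)
    same = clusters[:, None] == clusters[None, :]
    k_out = np.where(same, within, across)
    return k_out * k_time                     # Schur product of PSD matrices

t = np.arange(6, dtype=float)
clusters = np.array([0, 0, 0, 1, 1, 1])       # e.g. from topological clustering
K = cluster_kernel(t, clusters)               # (6, 6) PSD Gram matrix
```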

In this paper, we investigate the federated clustering (FedC) problem, which aims to accurately partition unlabeled data samples distributed over massive clients into a finite set of clusters under the orchestration of a parameter server, while preserving data privacy. Although FedC is an NP-hard optimization problem involving real variables (the cluster centroids) and binary variables (the cluster membership of each data sample), we judiciously reformulate it as a non-convex optimization problem with only one convex constraint, thereby yielding a soft clustering solution. We then propose a novel FedC algorithm based on the differential privacy (DP) technique, referred to as DP-FedC, which also accounts for partial client participation and multiple local model-updating steps. Furthermore, we characterize the proposed DP-FedC through theoretical analyses of its privacy protection and convergence rate, especially for the case of non-identically and independently distributed (non-i.i.d.) data; these analyses serve as guidelines for the algorithm's design. Finally, experimental results on two real datasets demonstrate the efficacy of DP-FedC, its superior performance over state-of-the-art FedC algorithms, and its consistency with the presented analytical results.
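To illustrate the DP flavor of such updates, here is a minimal sketch of one client's privatized soft-clustering step under the Gaussian mechanism; the clipping rule, `sigma`, and the exact statistics uploaded are our assumptions, not the DP-FedC algorithm itself.

```python
import numpy as np

def dp_local_update(X, centroids, clip=1.0, sigma=0.5, rng=None):
    """One client's differentially private soft-clustering update:
    soft-assign local samples, clip each sample's contribution to bound
    sensitivity, then add Gaussian noise before uploading."""
    rng = rng or np.random.default_rng()
    d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)   # (n, k)
    w = np.exp(-d2)
    w /= w.sum(axis=1, keepdims=True)                  # soft memberships
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    Xc = X * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    num = w.T @ Xc + rng.normal(0, sigma * clip, centroids.shape)
    den = w.sum(axis=0) + rng.normal(0, sigma, centroids.shape[0])
    return num, den   # server sums over clients, then num / den[:, None]
```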

In this paper, we present an accurate and scalable approach to the face clustering task. We aim to group a set of faces by their potential identities. We formulate this task as a link prediction problem: a link exists between two faces if they share the same identity. The key idea is that the local context in the feature space around an instance (face) contains rich information about the linkage relationship between this instance and its neighbors. By constructing sub-graphs around each instance as input data, which depict the local context, we use a graph convolutional network (GCN) to perform reasoning and infer the likelihood of linkage between pairs in the sub-graphs. Experiments show that our method is more robust to the complex distribution of faces than conventional methods, yields results favorably comparable to state-of-the-art methods on standard face clustering benchmarks, and is scalable to large datasets. Furthermore, the proposed method does not need the number of clusters as a prior, is aware of noise and outliers, and can be extended to a multi-view version for more accurate clustering.
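The two ingredients are easy to sketch: a local sub-graph around a pivot face, and a standard GCN layer to reason over it. The neighborhood size, the similarity threshold, and the function names below are illustrative choices, not the paper's configuration.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One GCN layer with symmetric normalization and ReLU."""
    A_hat = A + np.eye(len(A))                       # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)

def pivot_subgraph(feats, pivot, k=5, thresh=0.5):
    """Local-context sub-graph around one face: the pivot plus its k
    nearest neighbors, with edges where cosine similarity exceeds thresh."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    nbrs = np.argsort(-(f @ f[pivot]))[:k + 1]       # pivot is its own top hit
    A = (f[nbrs] @ f[nbrs].T > thresh).astype(float)
    np.fill_diagonal(A, 0.0)
    return nbrs, A
```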

In this paper, we introduce the Reinforced Mnemonic Reader for machine reading comprehension tasks, which enhances previous attentive readers in two respects. First, a reattention mechanism is proposed to refine current attentions by directly accessing past attentions that are temporally memorized in a multi-round alignment architecture, thereby avoiding the problems of attention redundancy and attention deficiency. Second, a new optimization approach called dynamic-critical reinforcement learning is introduced to extend the standard supervised method. It always encourages the model to predict a more acceptable answer, addressing the convergence suppression problem that occurs in traditional reinforcement learning algorithms. Extensive experiments on the Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-the-art results. Moreover, our model outperforms previous systems by over 6% in terms of both Exact Match and F1 metrics on two adversarial SQuAD datasets.
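One plausible reading of the reattention idea, reduced to a sketch: the raw similarity scores of the current alignment round are adjusted by the previous round's attention distribution before normalization. The combination rule and `gamma` below are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reattention(E, prev_attn, gamma=0.5):
    """Refine the current alignment: raw similarity scores E (m, n) are
    shifted by the previous round's attention before normalization, so the
    model can revisit positions it under-attended and avoid re-aligning
    redundantly to positions it already covered."""
    return softmax(E + gamma * prev_attn, axis=-1)
```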

In this paper, we propose a conceptually simple and geometrically interpretable objective function, the additive margin Softmax (AM-Softmax), for deep face verification. In general, the face verification task can be viewed as a metric learning problem, so learning large-margin face features whose intra-class variation is small and inter-class difference is large is of great importance for good performance. Recently, Large-margin Softmax and Angular Softmax have been proposed to incorporate the angular margin in a multiplicative manner. In this work, we introduce a novel additive angular margin for the Softmax loss, which is intuitively appealing and more interpretable than the existing works. We also emphasize and discuss the importance of feature normalization. Most importantly, our experiments on the LFW BLUFR protocol and MegaFace show that our additive margin Softmax loss consistently outperforms the current state-of-the-art methods using the same network architecture and training dataset. Our code is available at //github.com/happynear/AMSoftmax
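The loss itself is compact: with L2-normalized features and class weights, the target logit $\cos\theta_y$ is replaced by $\cos\theta_y - m$ and everything is scaled by $s$ before a standard cross-entropy. A numpy sketch follows; the values $s=30$ and $m=0.35$ match commonly reported settings, and this is an illustration rather than the released code.

```python
import numpy as np

def am_softmax_loss(features, W, labels, s=30.0, m=0.35):
    """AM-Softmax: cross-entropy over s*(cos theta - m) for the target
    class and s*cos theta for the rest, with both the features and the
    class weight vectors L2-normalized."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = W / np.linalg.norm(W, axis=0, keepdims=True)
    cos = f @ w                                  # (batch, num_classes)
    idx = np.arange(len(labels))
    logits = s * cos
    logits[idx, labels] = s * (cos[idx, labels] - m)   # additive margin
    logz = np.log(np.exp(logits).sum(axis=1))    # log partition per sample
    return float((logz - logits[idx, labels]).mean())
```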
