This paper is concerned with proportional-fairness (PF) maximization of the uplink spectral efficiency (SE) in a cell-free (CF) massive multiple-input multiple-output (MIMO) system, in which a large number of single-antenna access points (APs) connected to a central processing unit (CPU) serve many single-antenna users. To detect the user signals, the APs use matched filters based on local channel state information, while the CPU deploys receiver filters based on knowledge of the channel statistics. We formulate the SE PF maximization, which maximizes the sum of the logarithms of the achievable user rates, as a nonconvex optimization problem that is joint in the receiver filter coefficients and the user power allocation, subject to per-user power constraints. To handle the nonconvexity of the formulated problem, we develop an iterative algorithm that alternately finds the optimal filter coefficients at the CPU and the transmit powers at the users. While the filter coefficient design is cast as a generalized eigenvalue problem, the power allocation problem is addressed by a gradient projection (GP) approach. Simulation results show that SE PF maximization not only achieves a sum rate close to that of sum-rate maximization but also provides an improved trade-off between user rate fairness and the achievable sum rate.
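The power-allocation step can be pictured as a plain gradient-projection loop. The snippet below is a minimal sketch assuming a generic SINR-based rate model with placeholder gains `G`, `noise`, and budget `P_MAX`; it is not the paper's cell-free massive MIMO rate expression, and a finite-difference gradient stands in for the closed-form gradient.

```python
import numpy as np

# Hedged sketch of the gradient-projection (GP) power-allocation step.
# The rate model below is a generic SINR-based placeholder, not the paper's
# cell-free massive MIMO rate expression; gains G and noise are assumptions.

rng = np.random.default_rng(0)
K = 8                      # number of users (assumed)
P_MAX = 1.0                # per-user power budget (assumed)
G = rng.uniform(0.1, 1.0, size=(K, K))   # placeholder effective channel gains
noise = 1e-2

def pf_objective(p):
    # Proportional fairness: sum of the logarithms of the achievable rates.
    sinr = G.diagonal() * p / (G @ p - G.diagonal() * p + noise)
    return np.sum(np.log(np.log2(1.0 + sinr)))

def numerical_grad(f, p, eps=1e-6):
    # Finite-difference gradient as a stand-in for the closed-form gradient.
    g = np.zeros_like(p)
    for k in range(len(p)):
        e = np.zeros_like(p); e[k] = eps
        g[k] = (f(p + e) - f(p - e)) / (2 * eps)
    return g

p = np.full(K, 0.5 * P_MAX)          # feasible starting point
step = 1e-2
for _ in range(200):
    p = p + step * numerical_grad(pf_objective, p)
    p = np.clip(p, 1e-6, P_MAX)      # projection onto the box constraints
print("PF objective after GP:", pf_objective(p))
```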
We consider distributed online min-max resource allocation with a set of parallel agents and a parameter server. Our goal is to minimize the pointwise maximum over a set of time-varying, convex, and decreasing cost functions, without a priori information about these functions. We propose a novel online algorithm, termed Distributed Online resource Re-Allocation (DORA), in which non-stragglers learn to relinquish resources and share them with stragglers. A notable feature of DORA is that it requires neither gradient calculations nor projection operations, unlike most existing online optimization strategies, which allows it to substantially reduce the computation overhead in large-scale distributed networks. We show that the dynamic regret of the proposed algorithm is upper bounded by $O\left(T^{\frac{3}{4}}(1+P_T)^{\frac{1}{4}}\right)$, where $T$ is the total number of rounds and $P_T$ is the path-length of the instantaneous minimizers. We further consider an application to the bandwidth allocation problem in distributed online machine learning. Our numerical study demonstrates the efficacy of the proposed solution and its advantage over gradient- and/or projection-based resource allocation algorithms in reducing wall-clock time.
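To make the straggler-aware reallocation idea concrete, here is a toy single round in the spirit of DORA. The relinquish fraction, cost model, and update rule are assumptions for illustration only, not the algorithm's actual update; note that no gradient or projection is computed.

```python
import numpy as np

# Toy sketch of one round of straggler-aware resource re-allocation:
# non-stragglers give up a fraction of their resource, which is handed
# to the current straggler (the agent attaining the pointwise maximum cost).

rng = np.random.default_rng(1)
n_agents = 5
x = np.full(n_agents, 1.0 / n_agents)          # current resource shares (sum to 1)
work = rng.uniform(0.5, 2.0, size=n_agents)    # per-agent workload (assumed)

def costs(x):
    # Decreasing convex cost in the allocated resource, e.g. completion time.
    return work / np.maximum(x, 1e-9)

c = costs(x)
straggler = int(np.argmax(c))                  # agent defining the pointwise max
alpha = 0.1                                    # relinquish fraction (assumed)

released = 0.0
for i in range(n_agents):
    if i != straggler:                         # non-stragglers relinquish resource
        delta = alpha * x[i]
        x[i] -= delta
        released += delta
x[straggler] += released                       # straggler receives the released share

print("max cost before/after:", c.max(), costs(x).max())
```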
In this paper, a novel intelligent reflecting surface (IRS)-assisted wireless powered communication network (WPCN) architecture is proposed for power-constrained Internet-of-Things (IoT) smart devices, where the IRS is exploited to improve the performance of the WPCN under imperfect channel state information (CSI). We formulate a hybrid access point (HAP) transmit energy minimization problem by jointly optimizing the time allocation, HAP energy beamforming, receive beamforming, user transmit power allocation, and IRS energy and information reflection coefficients under imperfect CSI and a non-linear energy harvesting model. Owing to the strong coupling of the optimization variables, the formulated problem is non-convex and difficult to solve directly. To address this challenge, the alternating optimization (AO) technique is applied to decouple the variables: the problem is divided into three sub-problems that are solved alternately. Difference-of-convex (DC) programming is used to handle the non-convex rank-one constraints when solving for the IRS energy and information reflection coefficients. Numerical simulations verify the superiority of the proposed optimization algorithm in decreasing the HAP transmit energy compared with other benchmark schemes.
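The DC treatment of a rank-one constraint is commonly implemented by penalizing the gap between the trace and the largest eigenvalue of the lifted reflection matrix, with the eigenvalue linearized at the previous iterate. The sketch below illustrates this generic DC step with CVXPY; the objective matrix `A`, the penalty weight `rho`, and the feasible set are placeholders, not the paper's actual sub-problem.

```python
import numpy as np
import cvxpy as cp

# Hedged sketch: rank-one is encouraged for a lifted IRS reflection matrix
# V = v v^H by penalizing trace(V) - lambda_max(V), with lambda_max linearized
# at the previous iterate (DC programming). Data and constraints are assumed.

N = 4                                   # number of IRS elements (assumed)
rng = np.random.default_rng(2)
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = A @ A.conj().T                      # placeholder PSD data matrix
rho = 10.0                              # DC penalty weight (assumed)

V_prev = np.eye(N, dtype=complex)       # previous iterate
for _ in range(5):
    # Leading eigenvector of the previous iterate for the linearization.
    w, U = np.linalg.eigh(V_prev)
    u = U[:, -1:]

    V = cp.Variable((N, N), hermitian=True)
    lin_lmax = cp.real(cp.trace((u @ u.conj().T) @ V))   # linearized lambda_max
    obj = cp.Minimize(cp.real(cp.trace(A @ V))
                      + rho * (cp.real(cp.trace(V)) - lin_lmax))
    cons = [V >> 0, cp.diag(V) == 1]    # unit-modulus elements give unit diagonal
    cp.Problem(obj, cons).solve()
    V_prev = V.value

print("rank-one residual:",
      np.trace(V_prev).real - np.linalg.eigvalsh(V_prev)[-1])
```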
The multi-user Holographic Multiple-Input Multiple-Output Surface (MU-HMIMOS) paradigm, which is capable of realizing large continuous apertures with minimal power consumption, has recently been considered as an energy-efficient solution for future wireless networks, offering increased flexibility in shaping electromagnetic wave propagation according to the desired communication, localization, and sensing objectives. Tractable channel modeling of MU-HMIMOS systems is one of the most critical challenges, mainly due to the coupling effect induced by the excessively large number of closely spaced patch antennas. In this paper, we address this challenge for downlink multi-user communications and model the electromagnetic channel in the wavenumber domain using the Fourier plane wave representation. Based on the proposed channel model, we devise maximum-ratio transmission and Zero-Forcing (ZF) precoding schemes that capitalize on the sampled channel variance, which depends on the number and spacing of the patch antennas, and present their analytical spectral efficiency performance. Moreover, we propose a low-complexity ZF precoding scheme that leverages a Neumann series expansion to replace the matrix inversion, since direct matrix inversion is practically infeasible when the number of patch antennas is extremely large. Our extensive simulation results showcase the impact of the number of patch antennas and their spacing on the spectral efficiency of the considered systems. It is shown that more patch antennas and larger spacing result in improved performance due to the decreased correlation among the patches.
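The Neumann-series idea is to approximate the inverse of the user Gram matrix by a short matrix polynomial preconditioned by its diagonal. The sketch below uses a random i.i.d. channel as a stand-in for the wavenumber-domain MU-HMIMOS channel; only the truncated-series mechanics are the point.

```python
import numpy as np

# Minimal sketch of ZF precoding with a truncated Neumann-series approximation
# of the Gram-matrix inverse; H is a random placeholder channel, not the
# Fourier plane wave (wavenumber-domain) channel of the paper.

rng = np.random.default_rng(3)
K, M = 4, 64                                  # users, patch antennas (assumed)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

A = H @ H.conj().T                            # K x K Gram matrix
D_inv = np.diag(1.0 / np.diag(A))             # inverse of the diagonal part
X = np.eye(K) - D_inv @ A

# Truncated Neumann series: A^{-1} ~ (I + X + X^2 + ... + X^L) D^{-1},
# which converges when the spectral radius of X is below one (M >> K).
L_terms = 3
A_inv_approx = np.zeros_like(A)
Xp = np.eye(K, dtype=complex)
for _ in range(L_terms + 1):
    A_inv_approx += Xp @ D_inv
    Xp = Xp @ X

W_zf = H.conj().T @ A_inv_approx              # approximate ZF precoder
print("ZF residual ||H W - I||:", np.linalg.norm(H @ W_zf - np.eye(K)))
```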
De-homogenization is becoming an effective method to significantly expedite the design of high-resolution multiscale structures, but existing methods have thus far been confined to simple static compliance minimization. Two critical challenges must be addressed to accommodate general cases: enabling the design of the unit-cell orientation and using free-form microstructures. In this paper, we propose a data-driven de-homogenization method that allows effective design of the unit-cell orientation angles and conformal mapping of spatially varying, complex microstructures. We devise a parameterized microstructure composed of rods in different directions to provide more diversity in stiffness while retaining geometric simplicity. The microstructural geometry-property relationship is then surrogated by a neural network to avoid costly homogenization. A Cartesian representation of the unit-cell orientation is incorporated into the homogenization-based optimization to design the angles. The corresponding high-resolution multiscale structures are obtained from the homogenization-based designs through a conformal mapping constructed with sawtooth function fields. This allows us to assemble complex microstructures with an oriented and compatible tiling pattern while preserving the local homogenized properties. To demonstrate our method with a specific application, we optimize the frequency response of structures under harmonic excitations within a given frequency range. This is the first time a sawtooth function has been applied in a de-homogenization framework for complex design scenarios beyond static compliance minimization. The examples illustrate that multiscale structures can be generated with high efficiency and much better dynamic performance than macroscale-only optimization. Beyond frequency response design, the proposed framework can be applied to other general problems.
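The role of the sawtooth function in de-homogenization is to turn smooth mapping fields into a periodic, high-resolution rod pattern. The snippet below is a minimal sketch under strong simplifications: a constant orientation, a linear mapping field, and assumed rod widths stand in for the conformal map and the optimized microstructure parameters.

```python
import numpy as np

# Hedged sketch of projecting mapping fields into a high-resolution pattern
# with a sawtooth function, in the spirit of de-homogenization. The linear
# mapping fields, unit-cell size, and rod widths are illustrative assumptions.

def sawtooth(t):
    return t - np.floor(t)                     # periodic ramp in [0, 1)

nx, ny = 200, 100
x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))

theta = np.pi / 6                              # unit-cell orientation (assumed)
phi1 = np.cos(theta) * x + np.sin(theta) * y   # mapping field along the rods
phi2 = -np.sin(theta) * x + np.cos(theta) * y  # orthogonal mapping field

eps = 0.05                                     # unit-cell size (assumed)
w1, w2 = 0.3, 0.2                              # relative rod widths (assumed)

# Solid wherever either family of rods is present.
rho = ((sawtooth(phi1 / eps) < w1) | (sawtooth(phi2 / eps) < w2)).astype(float)
print("volume fraction of the de-homogenized pattern:", rho.mean())
```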
The sum-utility maximization problem is known to be important in the energy systems literature. The conventional assumption made to address this problem is that the utility is concave, but for some key applications such an assumption is not reasonable and does not reflect the consumer's actual behavior well. To address this issue, we pose and address a more general optimization problem in which the consumer's utility is assumed to be sigmoidal and to belong to a given class of functions. The considered class of functions is attractive for at least two reasons. First, the classical NP-hardness issue associated with sum-utility maximization is circumvented. Second, the class encompasses well-known performance metrics used to analyze pricing and energy-efficiency problems. This allows one to design a new and optimal inclining block rates (IBR) pricing policy, which also has the virtue of flattening the power consumption and reducing the peak power. We also show how to maximize the energy efficiency with a low-complexity algorithm. Simulations fully support the benefit of the proposed approach when compared to existing policies.
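To illustrate the two ingredients named above, the toy example below evaluates a sigmoidal (non-concave) utility against an inclining block rates price, where the block threshold, the two rates, and the sigmoid parameters are assumptions for illustration only, not the policy derived in the paper.

```python
import numpy as np

# Toy illustration of an inclining block rates (IBR) price and a sigmoidal
# utility. All numerical parameters are illustrative assumptions.

def ibr_price(consumption, threshold=2.0, low_rate=0.10, high_rate=0.25):
    # Energy up to the threshold is billed at low_rate, the excess at high_rate.
    base = np.minimum(consumption, threshold) * low_rate
    excess = np.maximum(consumption - threshold, 0.0) * high_rate
    return base + excess

def sigmoidal_utility(consumption, a=3.0, b=2.0):
    # Non-concave (sigmoidal) benefit of consuming energy.
    return 1.0 / (1.0 + np.exp(-a * (consumption - b)))

c = np.linspace(0.0, 5.0, 6)
net_benefit = sigmoidal_utility(c) - ibr_price(c)
print("consumption levels :", c)
print("net benefit        :", np.round(net_benefit, 3))
```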
This paper presents two hybrid beamforming (HYBF) designs for a multi-user multi-cell millimeter wave (mmWave) full-duplex (FD) system. The base stations (BSs) and the users are assumed to suffer from limited dynamic range (LDR) noise. First, we present a centralized HYBF (C-HYBF) scheme based on alternating optimization. In general, the complexity of C-HYBF schemes scales quadratically with the number of users, which is very undesirable; moreover, tremendous computational power is required to jointly optimize the numerous variables in FD. Another major drawback is the huge communication overhead required to transfer complete channel state information (CSI) to the central node every channel coherence time. To overcome these drawbacks, we present a very low-complexity and highly scalable cooperative per-link parallel and distributed (P$\&$D)-HYBF scheme. It allows each FD BS to update the beamformers for its users independently and in parallel on different computational processors. Its complexity scales only linearly with the network size, making it attractive for the next generation of large and dense mmWave FD networks. Simulation results show that both designs significantly outperform the fully digital half-duplex (HD) system with only a few radio-frequency (RF) chains, that they achieve similar performance, and that P$\&$D-HYBF requires considerably less execution time.
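The essence of hybrid beamforming with few RF chains is the factorization of a fully digital beamformer into a phase-only analog part and a small digital part. The sketch below shows only that generic structure on a placeholder channel; it is not the C-HYBF or P$\&$D-HYBF design of the paper and ignores the FD and LDR aspects.

```python
import numpy as np

# Hedged sketch of the hybrid beamforming structure: a fully digital
# beamformer W is approximated by a phase-only analog matrix F_rf and a
# small digital matrix F_bb (one least-squares refinement). Channel,
# dimensions, and the MRT baseline are illustrative assumptions.

rng = np.random.default_rng(7)
M, K, N_rf = 32, 4, 6                     # antennas, users, RF chains (assumed)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W = H.conj().T                            # fully digital MRT beamformer (M x K)

# Analog part: phase-only entries taken from randomly chosen columns of W.
F_rf = np.exp(1j * np.angle(W[:, rng.integers(0, K, size=N_rf)]))   # M x N_rf
# Digital part: least-squares fit so that F_rf @ F_bb approximates W.
F_bb, *_ = np.linalg.lstsq(F_rf, W, rcond=None)

print("approximation error:",
      np.linalg.norm(W - F_rf @ F_bb) / np.linalg.norm(W))
```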
The increasing number of wireless devices operating in unlicensed spectrum motivates the development of intelligent adaptive approaches to spectrum access that go beyond traditional carrier sensing. We develop a novel distributed implementation of a policy gradient method known as Proximal Policy Optimization, modeled on a two-stage Markov decision process, that enables such an intelligent approach while still achieving decentralized contention-based medium access. In each time slot, a base station (BS) uses information from spectrum sensing and reception quality to autonomously decide whether or not to transmit on a given resource, with the goal of maximizing proportional fairness network-wide. Empirically, we find that the proportional fairness reward accumulated by the policy gradient approach is significantly higher than that of even a genie-aided adaptive energy detection threshold. This is further validated by the improved sum and maximum user throughputs achieved by our approach.
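The network-wide objective such a policy optimizes is the proportional fairness reward, i.e., the sum of the logarithms of long-term user throughputs. The sketch below computes that reward under a crude threshold-based transmit rule and collision model, which are illustrative assumptions rather than the paper's PPO policy.

```python
import numpy as np

# Minimal sketch of the proportional-fairness reward over a run of time slots.
# The decision rule, collision model, and averaging factor are assumptions.

rng = np.random.default_rng(8)
n_bs, n_slots = 4, 100
avg_tput = np.full(n_bs, 1e-3)            # long-term average throughputs
beta = 0.05                               # exponential averaging factor (assumed)

for _ in range(n_slots):
    sensed_energy = rng.uniform(size=n_bs)
    transmit = sensed_energy < 0.6        # placeholder transmit decision
    # If several BSs transmit, assume they share the resource equally.
    rate = np.where(transmit, 1.0 / max(transmit.sum(), 1), 0.0)
    avg_tput = (1 - beta) * avg_tput + beta * rate

pf_reward = np.sum(np.log(avg_tput))      # proportional fairness objective
print("proportional fairness reward:", pf_reward)
```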
Recent work has proposed stochastic Plackett-Luce (PL) ranking models as a robust choice for optimizing relevance and fairness metrics. Unlike their deterministic counterparts, which require heuristic optimization algorithms, PL models are fully differentiable. In theory, they can be used to optimize ranking metrics via stochastic gradient descent. In practice, however, computing the gradient exactly is infeasible because it requires iterating over all possible permutations of items; consequently, actual applications rely on approximating the gradient via sampling techniques. In this paper, we introduce a novel algorithm, PL-Rank, that estimates the gradient of a PL ranking model w.r.t. both relevance and fairness metrics. Unlike existing approaches based on policy gradients, PL-Rank makes use of the specific structure of PL models and ranking metrics. Our experimental analysis shows that PL-Rank has greater sample efficiency and is computationally less costly than existing policy gradients, resulting in faster convergence at higher performance. PL-Rank further enables practitioners to apply PL models to more relevant and fairer real-world ranking systems.
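For context, the generic policy-gradient baseline that PL-Rank improves upon can be sketched as a REINFORCE-style estimator: sample rankings from the PL model (via the Gumbel trick), score them with a ranking metric, and weight the gradient of the sample log-probability. The items, relevance labels, and metric below are illustrative assumptions; PL-Rank itself exploits additional PL and metric structure not shown here.

```python
import numpy as np

# Hedged sketch: sampling-based (REINFORCE-style) gradient estimate of an
# expected DCG under a Plackett-Luce model. This is the generic baseline,
# not the PL-Rank estimator.

rng = np.random.default_rng(9)
n_items = 5
scores = rng.standard_normal(n_items)          # log-scores of the PL model
relevance = rng.uniform(size=n_items)          # item relevance labels (assumed)
dcg_weights = 1.0 / np.log2(np.arange(2, n_items + 2))

def sample_ranking(s):
    # Gumbel trick: sorting scores + Gumbel noise yields a PL sample.
    g = rng.gumbel(size=s.shape)
    return np.argsort(-(s + g))

def grad_log_prob(s, ranking):
    # d log P(ranking | s) / d s for the Plackett-Luce model.
    grad = np.zeros_like(s)
    remaining = list(ranking)
    for item in ranking:
        p = np.exp(s[remaining]) / np.exp(s[remaining]).sum()
        grad[remaining] -= p
        grad[item] += 1.0
        remaining.remove(item)
    return grad

n_samples = 1000
grad_est = np.zeros_like(scores)
for _ in range(n_samples):
    r = sample_ranking(scores)
    metric = np.sum(dcg_weights * relevance[r])     # DCG of the sampled ranking
    grad_est += metric * grad_log_prob(scores, r)
grad_est /= n_samples
print("estimated metric gradient:", np.round(grad_est, 3))
```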
Training datasets for machine learning often have some form of missingness. For example, to learn a model for deciding whom to give a loan, the available training data include individuals who were given a loan in the past, but not those who were denied one. This missingness, if ignored, nullifies any fairness guarantee of the training procedure when the model is deployed. Using causal graphs, we characterize the missingness mechanisms in different real-world scenarios. We show conditions under which various distributions used in popular fairness algorithms can or cannot be recovered from the training data. Our theoretical results imply that many of these algorithms cannot guarantee fairness in practice. Modeling missingness also helps to identify correct design principles for fair algorithms. For example, in multi-stage settings where decisions are made in multiple screening rounds, we use our framework to derive the minimal distributions required to design a fair algorithm. Our proposed algorithm decentralizes the decision-making process and still achieves performance similar to that of the optimal algorithm, which requires centralization and non-recoverable distributions.
Developing classification algorithms that are fair with respect to sensitive attributes of the data has become an important problem due to the growing deployment of classification algorithms in various social contexts. Several recent works have focused on fairness with respect to a specific metric, modeled the corresponding fair classification problem as a constrained optimization problem, and developed tailored algorithms to solve it. Despite this, there remain important metrics for which we do not have fair classifiers, and many of the aforementioned algorithms do not come with theoretical guarantees, perhaps because the resulting optimization problem is non-convex. The main contribution of this paper is a new meta-algorithm for classification that takes as input a large class of fairness constraints, with respect to multiple non-disjoint sensitive attributes, and comes with provable guarantees. This is achieved by first developing a meta-algorithm for a large family of classification problems with convex constraints, and then showing that classification problems with general types of fairness constraints can be reduced to problems in this family. We present empirical results showing that our algorithm can achieve near-perfect fairness with respect to various fairness metrics and that the loss in accuracy due to the imposed fairness constraints is often small. Overall, this work unifies several prior works on fair classification, presents a practical algorithm with theoretical guarantees, and can handle fairness metrics that could not be handled previously.
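For intuition only, the snippet below trains a logistic classifier with a statistical-parity penalty on synthetic data. It is a generic fairness-constrained baseline, not the paper's meta-algorithm or its reduction to convex-constrained classification; the data, penalty weight, and step size are assumptions.

```python
import numpy as np

# Hedged sketch: logistic regression with a squared statistical-parity penalty
# (gap in mean predicted score between two groups), trained by gradient descent.

rng = np.random.default_rng(11)
n, d = 500, 5
X = rng.standard_normal((n, d))
group = rng.integers(0, 2, size=n)                     # sensitive attribute
y = (X[:, 0] + 0.5 * group + 0.3 * rng.standard_normal(n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def penalized_grad(w, lam=2.0):
    p = sigmoid(X @ w)
    # Gradient of the average logistic loss.
    ll_grad = X.T @ (p - y) / n
    # Statistical-parity penalty: squared gap in mean predicted score.
    gap = p[group == 1].mean() - p[group == 0].mean()
    dgap = (X[group == 1].T @ (p[group == 1] * (1 - p[group == 1])) / (group == 1).sum()
            - X[group == 0].T @ (p[group == 0] * (1 - p[group == 0])) / (group == 0).sum())
    return ll_grad + lam * 2 * gap * dgap

w = np.zeros(d)
for _ in range(500):
    w -= 0.5 * penalized_grad(w)

p = sigmoid(X @ w)
print("parity gap after training:", p[group == 1].mean() - p[group == 0].mean())
```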