This paper proposes networked dynamics to solve resource allocation problems over time-varying multi-agent networks. The state of each agent represents the amount of used resources (or produced utilities), while the total amount of resources is fixed. The idea is to allocate the resources optimally among the group of agents by minimizing the overall cost function subject to a fixed sum of resources. Each agent's information is restricted to its own state and cost function and those of its immediate in-neighbors. This is motivated by distributed applications such as mobile edge computing, economic dispatch over smart grids, and multi-agent coverage control. This work provides a fast-convergent solution (in comparison with linear dynamics) while considering relaxed network connectivity and quantized communication links. The proposed dynamics reaches the optimal solution over switching (possibly disconnected) undirected networks as long as their union over bounded, non-overlapping time intervals contains a spanning tree. We prove feasibility of the solution, uniqueness of the optimal state, and convergence to the optimal value under the proposed dynamics; the analysis applies to similar first-order allocation dynamics with strongly sign-preserving nonlinearities, such as actuator saturation.
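To illustrate the class of dynamics referred to above, the following is a minimal sketch (not the paper's exact protocol) of first-order allocation dynamics with an actuator-saturation nonlinearity over a fixed undirected line graph, assuming quadratic local costs; the odd nonlinearity and the symmetric graph keep the total resource, i.e., the sum of the states, constant at every step.

```python
import numpy as np

def saturate(u, level=1.0):
    """Example of a strongly sign-preserving nonlinearity (actuator saturation)."""
    return np.clip(u, -level, level)

def step(x, grads, adjacency, dt=0.01):
    """One Euler step of x_i' = sum_j a_ij * g(grad_j - grad_i).
    Because g is odd and the graph is undirected, sum(x) is preserved."""
    g = grads(x)
    dx = np.zeros_like(x)
    for i in range(len(x)):
        for j in range(len(x)):
            if adjacency[i, j]:
                dx[i] += saturate(g[j] - g[i])
    return x + dt * dx

# Toy example: 4 agents with quadratic costs f_i(x) = 0.5*a_i*(x - c_i)**2,
# fixed total resource sum(x0), and a line communication graph.
a, c = np.array([1.0, 2.0, 1.5, 0.5]), np.array([2.0, -1.0, 0.5, 3.0])
grads = lambda x: a * (x - c)            # local gradients
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
x = np.array([5.0, 0.0, 0.0, 0.0])       # initial feasible allocation
for _ in range(5000):
    x = step(x, grads, A)
print(x, x.sum())                         # sum stays at 5.0
```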
In interactive coding, Alice and Bob wish to compute some function $f$ of their individual private inputs $x$ and $y$. They do this by engaging in a non-adaptive (fixed order, fixed length) protocol to jointly compute $f(x,y)$. The goal is to do this in an error-resilient way, such that even given some fraction of adversarial corruptions, both parties still learn $f(x,y)$. In this work, we study the optimal error resilience of such a protocol in the face of adversarial bit flips or erasures. While the optimal error resilience over a large alphabet is well understood, the situation over the binary alphabet has remained open. We resolve this problem by determining the optimal error resilience over binary channels. In particular, we construct protocols achieving $\frac16$ error resilience over the binary bit-flip channel and $\frac12$ error resilience over the binary erasure channel, for both of which matching upper bounds are known. We remark that the communication complexity of our binary bit-flip protocol is polynomial in the size of the inputs, and the communication complexity of our binary erasure protocol is linear in the size of the minimal noiseless protocol computing $f$.
The continuing growth in demand for time-sensitive applications has shifted the cloud paradigm from a centralized computing architecture toward distributed, heterogeneous computing platforms in which resources located at the edge of the network are used to provide cloud-like services. This paradigm is widely known as fog computing. Virtual machines (VMs) have been widely utilized in both paradigms to enhance network scalability and improve resource utilization and energy efficiency. Moreover, Passive Optical Networks (PONs) are well suited to handling the enormous volumes of data generated in the access network, owing to their energy efficiency and large bandwidth. In this paper, we utilize a PON to provide connectivity between multiple distributed fog units, forming federated (i.e., cooperative) computing units in the access network to serve intensive demands. We propose a mixed integer linear program (MILP) to optimize VM placement in the federated fog computing units with the objective of minimizing the total power consumption while considering inter-VM traffic. The results show that the proposed optimization model achieves power savings of up to 52% in VM allocation compared to a baseline approach that allocates VM requests while neglecting power consumption and inter-VM traffic.
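As a rough illustration of this kind of formulation, the PuLP sketch below places VMs on fog nodes so as to minimize idle, processing, and inter-VM traffic power; all parameters are hypothetical, and the paper's full MILP additionally models the PON architecture and its power profile.

```python
import pulp

vms, nodes = ["v1", "v2", "v3"], ["f1", "f2"]
cpu = {"v1": 2, "v2": 3, "v3": 1}                 # VM CPU demand
cap = {"f1": 4, "f2": 4}                          # fog node capacity
p_idle, p_cpu = {"f1": 50, "f2": 60}, 10          # idle power, power per CPU unit
traffic = {("v1", "v2"): 5}                       # inter-VM traffic (Mb/s)
p_net = 2                                         # power per Mb/s crossing the network

x = pulp.LpVariable.dicts("x", (vms, nodes), cat="Binary")     # VM v placed on node n
on = pulp.LpVariable.dicts("on", nodes, cat="Binary")          # node activated
split = pulp.LpVariable.dicts("split", traffic, cat="Binary")  # VM pair separated

prob = pulp.LpProblem("vm_placement", pulp.LpMinimize)
prob += (pulp.lpSum(p_idle[n] * on[n] for n in nodes)
         + pulp.lpSum(p_cpu * cpu[v] * x[v][n] for v in vms for n in nodes)
         + pulp.lpSum(p_net * t * split[pair] for pair, t in traffic.items()))

for v in vms:                                     # each VM placed exactly once
    prob += pulp.lpSum(x[v][n] for n in nodes) == 1
for n in nodes:                                   # capacity and node activation
    prob += pulp.lpSum(cpu[v] * x[v][n] for v in vms) <= cap[n] * on[n]
for (u, w) in traffic:                            # split = 1 if the pair is on different nodes
    for n in nodes:
        prob += split[(u, w)] >= x[u][n] - x[w][n]

prob.solve()
print({v: [n for n in nodes if x[v][n].value() == 1] for v in vms})
```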
We present distributed methods for jointly optimizing Intelligent Reflecting Surface (IRS) phase shifts and beamformers in a cellular network. The proposed schemes require knowledge of only the intra-cell training sequences and corresponding received signals, without explicit channel estimation. Instead, an SINR objective is estimated via sample means and maximized directly. This automatically includes and mitigates both intra- and inter-cell interference, provided that the uplink training is synchronized across cells. Different schemes are considered that limit the set of known training sequences from interferers. With MIMO links, an iterative synchronous bi-directional training scheme jointly optimizes the IRS parameters with the beamformers and combiners. Simulation results show that the proposed distributed methods incur only a modest performance degradation compared to centralized channel estimation schemes, which estimate and exchange all cross-channels between cells, and perform significantly better than channel estimation schemes that ignore inter-cell interference.
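A toy illustration of the sample-mean estimation step (not the paper's algorithm) follows: an SINR-type quantity is formed directly from a known training sequence and the received samples, with everything not explained by the training treated as interference plus noise; all signal parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 64                                   # training length
s = rng.choice([-1.0, 1.0], size=T)      # known intra-cell training sequence (unit power)

h = 0.8 + 0.3j                           # unknown effective link (incl. IRS and beamformer)
interference = 0.4 * rng.standard_normal(T) + 0.1j * rng.standard_normal(T)
noise = 0.1 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
r = h * s + interference + noise         # received training samples

# Sample-mean estimates: correlate with the known sequence to extract the
# desired signal component; the residual is treated as interference plus noise.
h_hat = np.mean(r * s) / np.mean(s * s)  # matched-filter output (sample mean)
signal_power = np.abs(h_hat) ** 2
residual = r - h_hat * s
inr_power = np.mean(np.abs(residual) ** 2)
print("estimated SINR (dB):", 10 * np.log10(signal_power / inr_power))
```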
Online allocation problems with resource constraints have a rich history in operations research. In this paper, we introduce the \emph{regularized online allocation problem}, a variant that includes a non-linear regularizer acting on the total resource consumption. In this problem, requests repeatedly arrive over time and, for each request, a decision maker needs to take an action that generates a reward and consumes resources. The objective is to simultaneously maximize additively separable rewards and the value of a non-separable regularizer subject to the resource constraints. Our primary motivation is allowing decision makers to trade off separable objectives such as the economic efficiency of an allocation with ancillary, non-separable objectives such as the fairness or equity of an allocation. We design an algorithm that is simple, fast, and attains good performance with both stochastic i.i.d.~and adversarial inputs. In particular, our algorithm is asymptotically optimal under stochastic i.i.d. input models and attains a fixed competitive ratio that depends on the regularizer when the input is adversarial. Furthermore, the algorithm and analysis do not require convexity or concavity of the reward function and the consumption function, which allows more model flexibility. Numerical experiments confirm the effectiveness of the proposed algorithm and of regularization in an internet advertising application.
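The following is a minimal dual-descent sketch of an online allocation loop of the flavor described above, with synthetic rewards and costs; the handling of the non-separable regularizer, which is the paper's focus, is not reproduced here.

```python
import numpy as np

# Requests arrive one at a time; a dual price vector mu discourages actions
# that consume scarce resources, and is updated by projected gradient steps.
rng = np.random.default_rng(1)
T, m, K = 1000, 2, 3                 # horizon, number of resources, actions per request
budget = np.array([300.0, 400.0])
rho = budget / T                     # per-period resource budget
mu, eta = np.zeros(m), 0.05          # dual prices, step size
spent = np.zeros(m)

for t in range(T):
    rewards = rng.random(K)                      # reward of each action
    costs = rng.random((K, m))                   # resource use of each action
    scores = rewards - costs @ mu                # reward net of resource prices
    a = int(np.argmax(scores))
    if np.all(spent + costs[a] <= budget):       # take the action only if feasible
        spent += costs[a]
        grad = rho - costs[a]
    else:
        grad = rho                               # skip the request
    mu = np.maximum(0.0, mu - eta * grad)        # projected dual gradient step

print("resource usage:", spent, "of budget", budget)
```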
5G technology allows heterogeneous services to coexist in the same physical network. On the radio access network (RAN), spectrum slicing of the shared radio resources is a critical task to guarantee the performance of each service. In this paper, we analyze a downlink scenario in which a base station (BS) must serve two types of traffic: enhanced mobile broadband (eMBB) and ultra-reliable low-latency communication (URLLC). Due to the nature of low-latency traffic, the BS knows the channel state information (CSI) of the eMBB users only. In this setting, we study the power minimization problem under orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) schemes. We analyze the impact of resource sharing, showing that knowledge of the eMBB CSI can also be used in resource allocation for the URLLC users. Based on this analysis, we propose two algorithms: a feasible approach and a block coordinate descent (BCD) approach. We show that BCD is optimal for the URLLC power allocation. The numerical results show that NOMA leads to lower power consumption than OMA, except when the URLLC user is very close to the BS. In the latter case, the optimal approach depends on the channel condition of the eMBB user. Even when the OMA scheme attains the best performance, its gap to NOMA is negligible, demonstrating NOMA's ability to exploit the shared resources to reduce power consumption in every condition.
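As a toy illustration of the BCD idea (not the paper's exact formulation), the sketch below alternates closed-form minimum-power updates for two mutually interfering users, with hypothetical channel gains and rate targets; successive interference cancellation is ignored so that each block update genuinely depends on the other user's power.

```python
# Block coordinate descent on (p_e, p_u): each block is the minimal power
# meeting its own SINR target given the other user's current power.
g_e, g_u, N0 = 0.8, 0.3, 1e-2        # channel gains and noise power
R_e, R_u = 1.0, 0.5                  # rate targets (bits/s/Hz)
gamma_e, gamma_u = 2 ** R_e - 1, 2 ** R_u - 1

p_e, p_u = 0.0, 0.0
for _ in range(50):
    # Block 1: minimal eMBB power meeting its SINR target given p_u.
    p_e = gamma_e * (p_u * g_e + N0) / g_e
    # Block 2: minimal URLLC power meeting its SINR target given p_e.
    p_u = gamma_u * (p_e * g_u + N0) / g_u

print("total power:", p_e + p_u)
```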
Dubbed the next-generation Internet, the metaverse is a virtual world that allows users to interact with each other or with objects in real time using their avatars. The metaverse is envisioned to support novel ecosystems of service provision in an immersive environment brought about by an intersection of the virtual and physical worlds. Native AI systems in the metaverse will personalize the user experience over time and shape it in a scalable, seamless, and synchronous way. However, the metaverse is characterized by diverse resource types amid a highly dynamic demand environment. In this paper, we consider the case study of virtual education in the metaverse and address the unified resource allocation problem under stochastic user demand. We propose a stochastic optimal resource allocation scheme (SORAS) based on stochastic integer programming, with the objective of minimizing the cost of the virtual service provider. The simulation results show that SORAS minimizes the cost of the virtual service provider while accounting for the uncertainty in users' demands.
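For illustration only, the following is a small scenario-based stochastic integer program in PuLP, with hypothetical costs and demand scenarios; SORAS itself is a richer formulation over multiple resource types.

```python
import pulp

scenarios = {"low": 0.5, "high": 0.5}             # probability of each demand scenario
demand = {"low": 20, "high": 60}                  # resource units requested by users
c_reserve, c_on_demand = 1.0, 3.0                 # cost per reserved / on-demand unit

prob = pulp.LpProblem("soras_sketch", pulp.LpMinimize)
reserve = pulp.LpVariable("reserve", lowBound=0, cat="Integer")               # first stage
extra = pulp.LpVariable.dicts("extra", scenarios, lowBound=0, cat="Integer")  # second stage

# Minimize reservation cost plus the expected on-demand cost over scenarios.
prob += c_reserve * reserve + pulp.lpSum(
    p * c_on_demand * extra[s] for s, p in scenarios.items())
for s in scenarios:                               # demand must be met in every scenario
    prob += reserve + extra[s] >= demand[s]

prob.solve()
print("reserve:", reserve.value(), {s: extra[s].value() for s in scenarios})
```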
Localizing individuals in crowds is more in line with the practical demands of subsequent high-level crowd analysis tasks than simply counting. However, existing localization-based methods rely on intermediate representations (\textit{i.e.}, density maps or pseudo boxes) as learning targets, which is counter-intuitive and error-prone. In this paper, we propose a purely point-based framework for joint crowd counting and individual localization. For this framework, instead of merely reporting the absolute counting error at the image level, we propose a new metric, called density Normalized Average Precision (nAP), to provide a more comprehensive and precise performance evaluation. Moreover, we design an intuitive solution under this framework, called the Point to Point Network (P2PNet). P2PNet discards superfluous steps and directly predicts a set of point proposals to represent heads in an image, consistent with human annotation results. Through thorough analysis, we reveal that the key step in implementing this idea is to assign optimal learning targets to these proposals. We therefore propose to conduct this crucial association in a one-to-one matching manner using the Hungarian algorithm. P2PNet not only significantly surpasses state-of-the-art methods on popular counting benchmarks, but also achieves promising localization accuracy. The code will be available at: //github.com/TencentYoutuResearch/CrowdCounting-P2PNet.
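A minimal sketch of the one-to-one association step follows, using the Hungarian algorithm via scipy; the matching cost shown here (distance minus a confidence bonus) is a stand-in and may differ from the exact cost used by P2PNet.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

pred_pts = np.array([[10.0, 12.0], [40.0, 41.0], [90.0, 15.0]])   # predicted head points
pred_conf = np.array([0.9, 0.8, 0.2])                             # proposal confidences
gt_pts = np.array([[11.0, 11.0], [42.0, 40.0]])                   # annotated head points

# Pairwise cost: Euclidean distance, discounted for confident proposals.
dist = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
cost = dist - 5.0 * pred_conf[:, None]

# One-to-one optimal assignment; unmatched proposals become negatives.
row, col = linear_sum_assignment(cost)
print(list(zip(row.tolist(), col.tolist())))  # matched (proposal, ground-truth) pairs
```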
The aim of this work is to develop a fully-distributed algorithmic framework for training graph convolutional networks (GCNs). The proposed method is able to exploit the meaningful relational structure of the input data, which are collected by a set of agents that communicate over a sparse network topology. After formulating the centralized GCN training problem, we first show how to perform inference in a distributed scenario where the underlying data graph is split among different agents. Then, we propose a distributed gradient descent procedure to solve the GCN training problem. The resulting model distributes computation along three lines: during inference, during back-propagation, and during optimization. Convergence to stationary solutions of the GCN training problem is also established under mild conditions. Finally, we propose an optimization criterion to design the communication topology between agents so as to match the graph describing the data relationships. A wide set of numerical results validates our proposal. To the best of our knowledge, this is the first work combining graph convolutional neural networks with distributed optimization.
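The sketch below illustrates only the optimization layer of such a scheme: decentralized gradient descent on the weights of a toy one-layer linear GCN, with each agent mixing its parameters with those of its neighbors through a doubly stochastic matrix; the data, graph, and mixing weights are synthetic, and the paper additionally distributes the forward and backward passes over the data graph itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_nodes, d_in, d_out = 3, 10, 4, 2
W_mix = np.array([[.5, .25, .25], [.25, .5, .25], [.25, .25, .5]])  # doubly stochastic mixing

# Each agent holds a local subgraph: normalized adjacency, features, targets.
agents = []
for _ in range(n_agents):
    A = rng.random((n_nodes, n_nodes)) < 0.3
    A_hat = (A + np.eye(n_nodes)) / (A.sum(1, keepdims=True) + 1)   # crude row normalization
    X, Y = rng.standard_normal((n_nodes, d_in)), rng.standard_normal((n_nodes, d_out))
    agents.append((A_hat, X, Y))

W = [rng.standard_normal((d_in, d_out)) * 0.1 for _ in range(n_agents)]  # local weight copies
lr = 0.05
for step in range(200):
    grads = []
    for (A_hat, X, Y), Wi in zip(agents, W):
        pred = A_hat @ X @ Wi                                # one-layer GCN propagation
        grads.append((A_hat @ X).T @ (pred - Y) / n_nodes)   # local MSE gradient
    # Consensus step: mix neighbors' parameters, then take a local gradient step.
    W = [sum(W_mix[i, j] * W[j] for j in range(n_agents)) - lr * grads[i]
         for i in range(n_agents)]

print("disagreement between agents:", np.linalg.norm(W[0] - W[1]))
```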
In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.
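As a small illustration of the smoothing idea behind DRS (not the full distributed algorithm), the sketch below minimizes a non-smooth toy objective using stochastic gradients of its Gaussian smoothing.

```python
import numpy as np

rng = np.random.default_rng(0)
d, gamma, lr, K = 10, 0.1, 0.05, 20

f = lambda x: np.sum(np.abs(x - 1.0))                 # non-smooth local objective

def smoothed_grad(x):
    """Two-point Gaussian-smoothing estimator of grad f_gamma(x),
    where f_gamma(x) = E[f(x + gamma * Z)] with Z standard normal."""
    g = np.zeros(d)
    for _ in range(K):
        z = rng.standard_normal(d)
        g += (f(x + gamma * z) - f(x)) / gamma * z
    return g / K

x = np.zeros(d)
for _ in range(500):
    x -= lr * smoothed_grad(x)
print("f(x) =", f(x))                                 # approaches the minimum value 0
```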
In this paper, we study the optimal convergence rates for distributed convex optimization problems over networks. We model the communication restrictions imposed by the network as a set of affine constraints and provide optimal complexity bounds for four different setups, namely when the function $F(\mathbf{x}) \triangleq \sum_{i=1}^{m} f_i(\mathbf{x})$ is (i) strongly convex and smooth, (ii) strongly convex, (iii) smooth, or (iv) simply convex. Our results show that Nesterov's accelerated gradient descent on the dual problem can be executed in a distributed manner and obtains the same optimal rates as in the centralized version of the problem (up to constant or logarithmic factors), with an additional cost related to the spectral gap of the interaction matrix. Finally, we discuss some extensions of the proposed setup, such as proximal-friendly functions, time-varying graphs, and improvement of the condition numbers.
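The following sketch illustrates the dual approach on a toy instance: Nesterov's accelerated gradient applied to the dual of a consensus-constrained problem with quadratic local functions over a four-node ring; the constants and graph are illustrative only.

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5, 1.5])           # local curvatures of f_i(x) = 0.5*a_i*(x - c_i)^2
c = np.array([3.0, -1.0, 2.0, 0.0])          # local minimizers
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]     # communication graph (ring)
A = np.zeros((len(edges), len(a)))           # edge-incidence matrix: A @ x = 0 <=> consensus
for k, (i, j) in enumerate(edges):
    A[k, i], A[k, j] = 1.0, -1.0

def primal(lmbda):
    # Local minimization of f_i(x_i) + (A^T lambda)_i * x_i, done in parallel per agent.
    return c - (A.T @ lmbda) / a

lmbda = y = np.zeros(len(edges))
L = np.linalg.norm(A @ np.diag(1 / a) @ A.T, 2)   # Lipschitz constant of the dual gradient
t = 1.0
for _ in range(200):
    grad = A @ primal(y)                     # dual gradient = consensus constraint violation
    lmbda_next = y + grad / L                # ascent step on the concave dual
    t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
    y = lmbda_next + (t - 1) / t_next * (lmbda_next - lmbda)   # Nesterov momentum
    lmbda, t = lmbda_next, t_next

print("consensus value:", primal(lmbda))     # all entries approach the common optimum
```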