A novel scenario-adapted distributed signaling technique for opportunistic communications is presented in this work. Each opportunistic user acquires locally sampled observations of the wireless environment to determine the occupied and available degrees-of-freedom (DoF). Due to sensing errors and the locality of the observations, subspace uncertainties arise that cause a performance loss and inter-system interference. Yet, we show that by posing the problem as a total least-squares (TLS) optimization, signaling patterns robust to subspace uncertainties can be designed. Furthermore, given the equivalence of the minimum-norm and TLS solutions, the latter inherits the appealing properties of linear predictors. In particular, the rotational invariance property is of paramount importance to guarantee detectability by neighboring nodes. Despite these advantages, end-to-end subspace uncertainties yield a performance loss that compromises both detectability and the performance of the wireless environment. To combat this, we tackle the distributed identification of the active subspace with and without side information about neighboring nodes' subspaces. An extensive simulation analysis highlights the performance of distributed concurrency schemes in achieving subspace agreement.
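For context, the standard total least-squares problem underlying such a design (stated generically here, not in the paper's exact notation) perturbs both the data matrix and the observation vector:
$$\min_{\Delta \mathbf{A},\, \Delta \mathbf{b},\, \mathbf{x}} \big\| [\, \Delta \mathbf{A} \;\; \Delta \mathbf{b} \,] \big\|_F \quad \text{subject to} \quad (\mathbf{A} + \Delta \mathbf{A})\, \mathbf{x} = \mathbf{b} + \Delta \mathbf{b},$$
whose solution is obtained from the right singular vector of $[\,\mathbf{A} \;\; \mathbf{b}\,]$ associated with its smallest singular value; the equivalence with the minimum-norm solution noted in the abstract is what links the TLS design to linear prediction.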
Multiagent reinforcement learning algorithms have not been widely adopted in large-scale environments with many agents, as they often scale poorly with the number of agents. Using mean field theory to aggregate agents has been proposed as a solution to this problem. However, almost all previous methods in this area make a strong assumption of a centralized system where all the agents in the environment learn the same policy and are effectively indistinguishable from each other. In this paper, we relax this assumption of indistinguishable agents and propose a new mean field system known as Decentralized Mean Field Games, where each agent can be quite different from the others. All agents learn independent policies in a decentralized fashion, based on their local observations. We define a theoretical solution concept for this system and provide a fixed-point guarantee for a Q-learning based algorithm in this system. A practical consequence of our approach is that we can address a `chicken-and-egg' problem in empirical mean field reinforcement learning algorithms. Further, we provide Q-learning and actor-critic algorithms that use the decentralized mean field learning approach and achieve stronger performance than common baselines in this area. In our setting, agents do not need to be clones of each other and learn in a fully decentralized fashion. Hence, for the first time, we show the application of mean field learning methods in fully competitive environments, large-scale continuous action space environments, and other environments with heterogeneous agents. Importantly, we also apply the mean field method to a ride-sharing problem using a real-world dataset. We propose a decentralized solution to this problem, which is more practical than existing centralized training methods.
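As an illustration of the kind of per-agent update such a decentralized mean field approach implies (a hypothetical sketch, not the paper's algorithm), each agent can maintain its own Q-function over its local observation and a discretized empirical mean action of the neighbors it observes:

import numpy as np
from collections import defaultdict

class DecentralizedMFQAgent:
    """Illustrative per-agent Q-learning conditioned on a local mean action (hypothetical sketch)."""
    def __init__(self, n_actions, lr=0.1, gamma=0.95, eps=0.1):
        self.n_actions, self.lr, self.gamma, self.eps = n_actions, lr, gamma, eps
        self.Q = defaultdict(lambda: np.zeros(n_actions))   # key: (local_obs, binned mean action)

    def _key(self, obs, mean_action):
        return (obs, tuple(np.round(mean_action, 1)))        # coarse discretization of neighbors' mean action

    def act(self, obs, mean_action):
        if np.random.rand() < self.eps:
            return np.random.randint(self.n_actions)
        return int(np.argmax(self.Q[self._key(obs, mean_action)]))

    def update(self, obs, mean_action, a, r, next_obs, next_mean_action):
        key, next_key = self._key(obs, mean_action), self._key(next_obs, next_mean_action)
        target = r + self.gamma * np.max(self.Q[next_key])
        self.Q[key][a] += self.lr * (target - self.Q[key][a])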
Cell-free Massive MIMO systems consist of a large number of geographically distributed access points (APs) that serve users by coherent joint transmission. Downlink power allocation is important in these systems, to determine which APs should transmit to which users and with what power. If the system is implemented correctly, it can deliver a more uniform user performance than conventional cellular networks. To this end, previous works have shown how to perform system-wide max-min fairness power allocation when using maximum ratio precoding. In this paper, we first generalize this method to arbitrary precoding, and then train a neural network to perform approximately the same power allocation but with reduced computational complexity. Finally, we train one neural network per AP to mimic system-wide max-min fairness power allocation, but using only local information. By learning the structure of the local propagation environment, this method outperforms the state-of-the-art distributed power allocation method from the Cell-free Massive MIMO literature.
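A minimal sketch of the kind of per-AP model such a local approach suggests (the architecture, sizes, and constraint handling below are illustrative assumptions, not the paper's design): a small network maps the AP's locally measured large-scale fading coefficients to per-user power coefficients and is trained to imitate the centrally computed max-min fairness allocation.

import torch
import torch.nn as nn

class PerAPPowerNet(nn.Module):
    """Hypothetical per-AP model: local large-scale fading -> per-user power coefficients."""
    def __init__(self, n_users=20, p_max=1.0):
        super().__init__()
        self.p_max = p_max
        self.net = nn.Sequential(
            nn.Linear(n_users, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_users),
        )

    def forward(self, beta_local):                         # beta_local: (batch, n_users), positive
        logits = self.net(torch.log(beta_local))           # log-scale inputs span many dB
        return self.p_max * torch.softmax(logits, dim=-1)  # powers respect the AP power budget

# Supervised imitation: targets are the system-wide max-min allocations (assumed precomputed)
model = PerAPPowerNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()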
Blades manufactured through flank and point milling will likely exhibit geometric variability. Gauging the aerodynamic repercussions of such variability, prior to manufacturing a component, is challenging enough, let alone trying to predict what the amplified impact of any in-service degradation will be. While rules of thumb that govern the tolerance band can be devised based on expected boundary layer characteristics at known regions and levels of degradation, it remains a challenge to translate these insights into quantitative bounds for manufacturing. In this work, we tackle this challenge by leveraging ideas from dimension reduction to construct low-dimensional representations of aerodynamic performance metrics. These low-dimensional models can identify a subspace which contains designs that are invariant in performance -- the inactive subspace. By sampling within this subspace, we design techniques for drafting manufacturing tolerances and for quantifying whether a scanned component should be used or scrapped. We introduce the blade envelope as a computational manufacturing guide for a blade that is also amenable to qualitative visualizations. In this paper, the first of two parts, we discuss its underlying concept and detail its computational methodology, assuming one is interested only in the single objective of ensuring that the loss of all manufactured blades remains constant. To demonstrate the utility of our ideas we devise a series of computational experiments with the Von Karman Institute's LS89 turbine blade.
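A compact sketch of the dimension-reduction step referred to here, using the standard active-subspace construction (the gradient source and the choice of split between active and inactive directions are placeholders):

import numpy as np

def active_inactive_subspaces(grads, n_active):
    """Estimate active/inactive subspaces from sampled gradients of a performance metric.

    grads: (N, d) array of gradients of, e.g., loss with respect to d geometry parameters.
    Returns (W_active, W_inactive) with orthonormal columns.
    """
    C = grads.T @ grads / grads.shape[0]          # empirical covariance of the gradients
    eigvals, eigvecs = np.linalg.eigh(C)          # ascending eigenvalues
    W = eigvecs[:, np.argsort(eigvals)[::-1]]     # reorder columns by descending eigenvalue
    return W[:, :n_active], W[:, n_active:]

# Designs perturbed along the inactive subspace leave the metric approximately unchanged:
#   x_new = x_nominal + W_inactive @ z   for small coefficient vectors z.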
We present the design of a new passive communication method that does not rely on ambient or generated RF sources. Instead, we exploit the Johnson (thermal) noise generated by a resistor to transmit information bits wirelessly. By switching the load connected to an antenna between a resistor and an open circuit, we achieve data rates of up to 26 bps and distances of up to 7.3 meters. This communication method consumes orders of magnitude less power than conventional communication schemes and presents the opportunity to enable wireless communication in areas with a complete lack of connectivity.
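As a reminder of the underlying physics (standard Johnson-Nyquist results, not figures from the paper), a resistor $R$ at temperature $T$ produces, over a bandwidth $\Delta f$, a mean-square open-circuit noise voltage and an available noise power of
$$\overline{v_n^2} = 4 k_B T R \,\Delta f, \qquad P_{\mathrm{av}} = k_B T \,\Delta f,$$
so switching the antenna load between the resistor and an open circuit modulates whether this noise is coupled to the antenna, which is the property a receiver can detect.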
Pearson's chi-squared test is widely used to test the goodness of fit between categorical data and a given discrete distribution. When the number of categories of the data, say $k$, is a fixed integer, Pearson's chi-squared test statistic converges in distribution to a chi-squared distribution with $k-1$ degrees of freedom as the sample size $n$ goes to infinity. In real applications, the number $k$ often changes with $n$ and may even be much larger than $n$. Using martingale techniques, we prove that Pearson's chi-squared test statistic converges to a normal distribution under quite general conditions. We also propose a new test statistic that is more powerful than the chi-squared test statistic, based on our simulation study. A real application to lottery data is provided to illustrate our methodology.
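For reference, with observed counts $N_1, \dots, N_k$ from $n$ samples and hypothesized cell probabilities $p_1, \dots, p_k$, the classical statistic and fixed-$k$ limit are
$$\chi_n^2 = \sum_{i=1}^{k} \frac{(N_i - n p_i)^2}{n p_i} \;\xrightarrow{d}\; \chi^2_{k-1}, \qquad k \text{ fixed},\ n \to \infty;$$
when $k$ grows with $n$, the natural object is a centered and scaled statistic such as $\big(\chi_n^2 - (k-1)\big)/\sqrt{2(k-1)}$, whose normal limit is the type of result established here (the paper's exact centering and scaling may differ).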
We consider two-stage robust optimization problems, which can be seen as games between a decision maker and an adversary. After the decision maker fixes part of the solution, the adversary chooses a scenario from a specified uncertainty set. Afterwards, the decision maker can react to this scenario by completing the partial first-stage solution to a full solution. We extend this classic setting by adding another adversary stage after the second decision-maker stage, which results in min-max-min-max problems, thus pushing two-stage settings further towards more general multi-stage problems. We focus on budgeted uncertainty sets and consider both the continuous and the discrete case. For the former, we show that a wide range of robust combinatorial optimization problems can be decomposed into polynomially many subproblems, which can be solved in polynomial time, for example, in the case of (\textsc{representative}) \textsc{selection}. For the latter, we prove NP-hardness for a wide range of problems, but note that the special case where first- and second-stage adversarial costs are equal remains solvable in polynomial time.
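Schematically, the resulting problems have the form (the precise objective and the sets faced by the two adversaries depend on the problem at hand)
$$\min_{x \in X} \; \max_{c^{1} \in U} \; \min_{y \in Y(x)} \; \max_{c^{2} \in U} \; f\!\left(x, y, c^{1}, c^{2}\right),$$
where a budgeted uncertainty set in the continuous case is $U = \{ c : c_i = \bar{c}_i + \delta_i \hat{c}_i,\ \delta \in [0,1]^n,\ \sum_i \delta_i \le \Gamma \}$, while the discrete case restricts $\delta \in \{0,1\}^n$.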
Autonomous vehicles use 3D sensors for perception. Cooperative perception enables vehicles to share sensor readings with each other to improve safety, but prior work in cooperative perception scales poorly even with infrastructure support. AutoCast enables scalable infrastructure-less cooperative perception using direct vehicle-to-vehicle communication. It carefully determines which objects to share based on positional relationships between traffic participants and the time evolution of their trajectories, and it coordinates vehicles and optimally schedules transmissions in a distributed fashion. Extensive evaluation under different scenarios shows that, unlike competing approaches, AutoCast avoids crashes and near-misses that occur frequently without cooperative perception; its performance scales gracefully in dense traffic, providing 2-4x visibility into safety-critical objects compared to existing cooperative perception schemes; its transmission schedules can be completed on a real radio testbed; and its scheduling algorithm is near-optimal with negligible computation overhead.
We study the asymptotic normality of two estimators of the integrated volatility of volatility based on the Fourier methodology, which does not require the pre-estimation of the spot volatility. We show that the bias-corrected estimator reaches the optimal rate 1/4, while the estimator without bias-correction has a slower convergence rate and a smaller asymptotic variance. Additionally, we provide simulation results that support the theoretical asymptotic distribution of the rate-efficient estimator and show the accuracy of the Fourier estimator in comparison with a rate-optimal estimator based on the pre-estimation of the spot volatility. Finally, we reconstruct the daily volatility of volatility of the S&P500 and EUROSTOXX50 indices over long samples via the rate-optimal Fourier estimator and provide novel insight into the existence of stylized facts about its dynamics.
We consider a fully decentralized multi-player stochastic multi-armed bandit setting in which the players cannot communicate with each other and observe only their own actions and rewards. The environment may appear differently to different players, $\textit{i.e.}$, the reward distributions for a given arm are heterogeneous across players. In the case of a collision (when more than one player plays the same arm), we allow the colliding players to receive non-zero rewards. The time horizon $T$ for which the arms are played is \emph{not} known to the players. Within this setup, where the number of players is allowed to be greater than the number of arms, we present a policy that achieves near order-optimal expected regret of $O(\log^{1 + \delta} T)$ for some $0 < \delta < 1$. This paper is accepted at IEEE Transactions on Information Theory.
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks. In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge and may severely deteriorate the generalization performance. In this paper, we investigate and identify the limitations of several decentralized optimization algorithms under different degrees of data heterogeneity. We propose a novel momentum-based method to mitigate this decentralized training difficulty. We show in extensive empirical experiments on various CV/NLP datasets (CIFAR-10, ImageNet, and AG News) and several network topologies (Ring and Social Network) that our method is much more robust to the heterogeneity of clients' data than existing methods, achieving a significant improvement in test performance ($1\% \!-\! 20\%$). Our code is publicly available.
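A skeleton of the training loop this setting implies (generic decentralized momentum SGD with gossip averaging over a mixing matrix; the paper's specific momentum correction is not reproduced here):

import torch
import torch.nn.functional as F

def decentralized_round(models, momenta, batches, W, lr=0.05, beta=0.9):
    """One communication round: local momentum-SGD step on each node, then gossip averaging.

    models:  one model per node (identical architectures).
    momenta: per-node dicts of momentum buffers, initialized to zeros shaped like each parameter.
    W:       doubly-stochastic mixing matrix encoding the topology (ring, social graph, ...).
    """
    # 1) each node takes a momentum-SGD step on its own, possibly heterogeneous, data
    for i, (model, (x, y)) in enumerate(zip(models, batches)):
        loss = F.cross_entropy(model(x), y)
        model.zero_grad()
        loss.backward()
        for name, p in model.named_parameters():
            momenta[i][name] = beta * momenta[i][name] + p.grad
            p.data -= lr * momenta[i][name]

    # 2) gossip: each node replaces its parameters with a W-weighted average of its neighbours'
    snapshots = [{n: p.data.clone() for n, p in m.named_parameters()} for m in models]
    for i, model in enumerate(models):
        for n, p in model.named_parameters():
            p.data = sum(W[i][j] * snapshots[j][n] for j in range(len(models)))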