A superdirective array can achieve an array gain proportional to the square of the number of antennas, $M^2$. Early studies of superdirectivity, however, paid little attention to the wireless communication perspective. To leverage superdirectivity for enhancing spectral efficiency, this paper investigates multi-user communication systems with superdirective arrays. We first propose a field-coupling-aware (FCA) multi-user channel estimation method that accounts for antenna coupling effects. Aiming to maximize the power gain of the target user, we propose multi-user multipath superdirective precoding (SP) as an extension of our prior work on coupling-based superdirective beamforming. To reduce inter-user interference, we further propose interference-nulling superdirective precoding (INSP) as the optimal solution for maximizing user power gains while eliminating interference. Taking ohmic loss into consideration, we then propose a regularized interference-nulling superdirective precoding (RINSP) method. Finally, we revisit the well-known narrow-directivity-bandwidth issue and find that it is not a fundamental problem for superdirective arrays in multi-carrier communication systems. Simulation results show that our proposed methods significantly outperform state-of-the-art methods. Notably, in the multi-user scenario, an 18-antenna superdirective array achieves up to a 9-fold increase in spectral efficiency over traditional multiple-input multiple-output (MIMO) while halving the array aperture.
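As a point of reference for the interference-nulling idea (not the authors' coupling-aware method), the following is a minimal sketch of zero-forcing and regularized zero-forcing precoding for a generic multi-user MIMO downlink; the channel model, regularizer, and normalization below are illustrative assumptions:

```python
import numpy as np

# Hypothetical setup: an M-antenna array serving K single-antenna users.
M, K = 18, 4
rng = np.random.default_rng(0)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: nulls inter-user interference (H @ W_zf = I).
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)
print(np.round(np.abs(H @ W_zf), 3))   # identity: interference eliminated

# Regularized variant: lam trades interference nulling against noise/loss
# amplification (conceptually analogous to how a regularized scheme can
# account for ohmic loss; the exact formulation here is an assumption).
lam = 0.1
W_rzf = H.conj().T @ np.linalg.inv(H @ H.conj().T + lam * np.eye(K))
W_rzf /= np.linalg.norm(W_rzf, axis=0, keepdims=True)  # unit-power columns

# Off-diagonal entries of the effective channel are residual interference.
print(np.round(np.abs(H @ W_rzf), 3))
```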
Binary responses arise in a multitude of statistical problems, including binary classification, bioassay, current status data problems and sensitivity estimation. Such problems have attracted interest in the Bayesian nonparametrics community since the early 1970s, but inference given binary data is intractable for a wide range of modern simulation-based models, even when MCMC methods are employed. Recently, Christensen (2023) introduced a novel simulation technique based on counting permutations, which can estimate both posterior distributions and marginal likelihoods for any model from which a random sample can be generated. However, the original implementation of this technique struggles once the sample size grows large (n > 250). Here we present perms, a new implementation of the technique that is substantially faster and able to handle larger data problems than the original. It is available both as an R package and as a Python library. The basic usage of perms is illustrated via two simple examples: a tractable toy problem and a bioassay problem. A more complex example involving changepoint analysis is also considered. We also cover the details of the implementation and illustrate the computational speed gain of perms via a simple simulation study.
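To fix ideas, and without assuming anything about the perms API, here is a naive Monte Carlo baseline for the kind of quantity perms estimates: the marginal likelihood of binary bioassay responses under a model we can only simulate from. The logistic dose-response model, prior, and data below are illustrative assumptions; the permutation-counting estimator replaces this high-variance average:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative bioassay data: dose levels and binary responses (assumed).
doses = np.array([-0.86, -0.30, -0.05, 0.73])
y = np.array([0, 1, 1, 1])

# Prior on (alpha, beta) of a logistic dose-response curve: simulate only.
def sample_prior(size):
    return rng.normal(0.0, 2.0, size), rng.normal(1.0, 2.0, size)

# Naive Monte Carlo estimate of the marginal likelihood p(y): average the
# likelihood over prior draws. perms targets the same quantity with a far
# lower-variance permutation-counting estimator.
S = 100_000
alpha, beta = sample_prior(S)
p = 1.0 / (1.0 + np.exp(-(alpha[:, None] + beta[:, None] * doses)))
lik = np.prod(np.where(y == 1, p, 1.0 - p), axis=1)
print("naive MC estimate of p(y):", lik.mean())
```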
In Bayesian statistics, posterior contraction rates (PCRs) quantify the speed at which the posterior distribution concentrates, in a suitable sense, on arbitrarily small neighborhoods of a true model as the sample size goes to infinity. In this paper, we develop a new approach to PCRs with respect to strong norm distances on parameter spaces of functions. Critical to our approach is the combination of local Lipschitz continuity of the posterior distribution with a dynamic formulation of the Wasserstein distance, which allows us to establish an interesting connection between PCRs and some classical problems in mathematical analysis, probability and statistics, e.g., Laplace methods for approximating integrals, Sanov's large deviation principles in the Wasserstein distance, rates of convergence of mean Glivenko-Cantelli theorems, and estimates of weighted Poincar\'e-Wirtinger constants. We first present a theorem on PCRs for a model in the regular infinite-dimensional exponential family, which exploits the sufficient statistics of the model, and then extend this theorem to a general dominated model. These results rely on novel techniques for evaluating Laplace integrals and weighted Poincar\'e-Wirtinger constants in infinite dimensions, which are of independent interest. The proposed approach is applied to the regular parametric model, the multinomial model, the finite-dimensional and infinite-dimensional logistic-Gaussian models, and infinite-dimensional linear regression. In general, our approach leads to optimal PCRs in finite-dimensional models, whereas for infinite-dimensional models it shows explicitly how the prior distribution affects PCRs.
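For readers unfamiliar with the notion, the standard definition of a PCR (in the sense of Ghosal, Ghosh and van der Vaart) can be stated as follows: a sequence $\epsilon_n \to 0$ is a posterior contraction rate at the true parameter $\theta_0$, with respect to a metric $d$ on the parameter space, if for a sufficiently large constant $M > 0$
\[
\Pi_n\bigl(\theta : d(\theta,\theta_0) > M\,\epsilon_n \,\big|\, X_1,\dots,X_n\bigr) \longrightarrow 0
\quad \text{in } P_{\theta_0}^{n}\text{-probability as } n\to\infty .
\]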
This work considers Bayesian experimental design for the inverse boundary value problem of linear elasticity in a two-dimensional setting. The aim is to optimize the positions of compactly supported pressure activations on the boundary of the examined body so as to maximize the information content of the resulting boundary deformations, which serve as data for reconstructing the Lam\'e parameters inside the object. We resort to a linearized measurement model and adopt the framework of Bayesian experimental design, under the assumption that the prior and measurement noise distributions are mutually independent Gaussians. This enables the use of the standard Bayesian A-optimality criterion for deducing optimal positions of the pressure activations. The (second) derivatives of the boundary measurements with respect to the Lam\'e parameters and the activation positions are derived so that the corresponding objective function, i.e., the trace of the posterior covariance matrix, can be minimized by a gradient-based optimization algorithm. Two-dimensional numerical experiments demonstrate the functionality of the approach.
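Under the stated linearized Gaussian assumptions, the A-optimality objective has a closed form: with measurement Jacobian $J$, prior covariance $\Gamma_{\mathrm{pr}}$ and noise covariance $\Gamma_{\mathrm{noise}}$, the posterior covariance is $(J^{\top}\Gamma_{\mathrm{noise}}^{-1}J + \Gamma_{\mathrm{pr}}^{-1})^{-1}$. A minimal numerical sketch, with a random $J$ standing in for the elasticity forward map (an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in Jacobian of boundary measurements w.r.t. the Lame parameters;
# in the actual application it depends on the activation positions.
m, n = 40, 60                   # measurements, discretized parameters
J = rng.standard_normal((m, n))

Gamma_pr = np.eye(n)            # Gaussian prior covariance (assumed identity)
Gamma_noise = 0.01 * np.eye(m)  # Gaussian measurement noise covariance

# Posterior covariance of the linear Gaussian model:
#   Gamma_post = (J^T Gamma_noise^{-1} J + Gamma_pr^{-1})^{-1}
Gamma_post = np.linalg.inv(
    J.T @ np.linalg.solve(Gamma_noise, J) + np.linalg.inv(Gamma_pr)
)

# A-optimality criterion: the trace of the posterior covariance. A design
# loop would minimize this over activation positions with a gradient method.
print("A-optimality value:", np.trace(Gamma_post))
```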
We propose a novel algorithm for solving the composite Federated Learning (FL) problem. The algorithm handles non-smooth regularization by strategically decoupling the proximal operator from communication, and it addresses client drift without any assumptions about data similarity. Moreover, each worker uses local updates to reduce its communication frequency with the server and transmits only a $d$-dimensional vector per communication round. We prove that the algorithm converges linearly to a neighborhood of the optimal solution, and we demonstrate its superiority over state-of-the-art methods in numerical experiments.
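The abstract does not spell out the algorithm, but the decoupling idea can be illustrated with a generic sketch: workers run local gradient steps on the smooth part only, each communicates one $d$-dimensional vector, and the proximal operator of the non-smooth regularizer is applied once at the server. Everything below (quadratic local losses, the $\ell_1$ regularizer, step sizes) is an illustrative assumption, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
d, workers, local_steps, lr, lam = 10, 5, 10, 0.05, 0.1

# Assumed smooth local losses: f_i(x) = 0.5 * ||A_i x - b_i||^2.
A = [rng.standard_normal((20, d)) for _ in range(workers)]
b = [rng.standard_normal(20) for _ in range(workers)]

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, applied only at the server."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(d)
for _ in range(100):                        # communication rounds
    updates = []
    for i in range(workers):
        z = x.copy()
        for _ in range(local_steps):        # local updates on the smooth part
            z -= lr * A[i].T @ (A[i] @ z - b[i])
        updates.append(z)                   # one d-dimensional vector per round
    # Server: average, then apply the (decoupled) proximal step.
    x = soft_threshold(np.mean(updates, axis=0), lr * lam)

print("final iterate:", np.round(x, 3))
```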
We extend the error bounds from [SIMAX, Vol. 43, Iss. 2, pp. 787-811 (2022)] for the Lanczos method for matrix function approximation to the block algorithm. Numerical experiments suggest that our bounds are fairly robust to changes in block size and have the potential to serve as a practical stopping criterion. Further experiments work toward a better understanding of how certain hyperparameters should be chosen in order to maximize the quality of the error bounds, even in the previously studied block-size-one case.
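For context, the quantity whose error is being bounded is the Lanczos approximation to $f(A)b$: after $k$ steps, $f(A)b \approx \|b\|\, Q_k f(T_k) e_1$ with $T_k$ the Lanczos tridiagonal matrix. A minimal block-size-one sketch (test matrix and $f = \exp$ are our choices):

```python
import numpy as np
from scipy.linalg import expm

def lanczos_fAb(A, b, k, f=expm):
    """Approximate f(A) @ b with k steps of (block-size-one) Lanczos."""
    n = len(b)
    Q = np.zeros((n, k))
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # f(A) b  ~=  ||b|| * Q_k f(T_k) e_1
    return np.linalg.norm(b) * Q @ f(T)[:, 0]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200))
A = (A + A.T) / 20                 # symmetric test matrix, modest spectrum
b = rng.standard_normal(200)
approx = lanczos_fAb(A, b, k=30)
print("error:", np.linalg.norm(approx - expm(A) @ b))
```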
Recently, a stability theory has been developed to study the linear stability of modified Patankar--Runge--Kutta (MPRK) schemes. This theory provides sufficient conditions both for a fixed point of an MPRK scheme to be stable and for the scheme to converge towards the steady state of the corresponding initial value problem, where the main assumption is that the initial value is sufficiently close to the steady state. Initially, numerical experiments in several publications indicated that these linear stability properties are not only local but global, as is the case for general linear methods. Recently, however, it was discovered that the linear stability of the MPDeC(8) scheme is indeed only local in nature. We conjecture that this is a consequence of negative Runge--Kutta (RK) parameters of MPDeC(8), and that linear stability is indeed global whenever the RK parameters are nonnegative. To support this conjecture, we examine the family of MPRK22($\alpha$) methods with negative RK parameters and show that it also contains methods whose stability properties are only local. By contrast, this local-only linear stability is not observed for MPRK22($\alpha$) schemes with nonnegative RK parameters.
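To make the objects concrete, below is a sketch of the MPRK22($\alpha$) scheme of Kopecz and Meister applied to a simple linear production-destruction system (the test problem is our choice). The RK parameters are $a_{21}=\alpha$, $b_2 = 1/(2\alpha)$, $b_1 = 1 - 1/(2\alpha)$, so $b_1$ becomes negative precisely for $\alpha < 1/2$:

```python
import numpy as np

def mprk22_step(y, h, P, alpha):
    """One MPRK22(alpha) step for the production-destruction system
    y_i' = sum_j (P(y)[i, j] - P(y)[j, i]), with P[i, j] the rate of
    production of constituent i from constituent j."""
    b2 = 1.0 / (2.0 * alpha)
    b1 = 1.0 - b2                      # negative for alpha < 1/2
    n = len(y)

    def solve(wP, sigma):
        # Patankar-weighted linear system M @ y_new = y.
        M = np.eye(n)
        for i in range(n):
            for j in range(n):
                M[i, j] -= h * wP[i, j] / sigma[j]    # weighted production
                M[i, i] += h * wP[j, i] / sigma[i]    # weighted destruction
        return np.linalg.solve(M, y)

    P1 = P(y)
    y2 = solve(alpha * P1, y)                         # second stage value
    sigma = y ** (1.0 - 1.0 / alpha) * y2 ** (1.0 / alpha)
    return solve(b1 * P1 + b2 * P(y2), sigma)         # update

# Linear test problem (our choice): y1' = y2 - 5*y1, y2' = 5*y1 - y2.
P = lambda y: np.array([[0.0, y[1]], [5.0 * y[0], 0.0]])
y, h = np.array([0.9, 0.1]), 0.25
for _ in range(40):
    y = mprk22_step(y, h, P, alpha=1.0)
print(y, "steady state:", [1.0 / 6.0, 5.0 / 6.0])
```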
Cloud computing and the evolution of management methodologies such as Lean Management and Agile entail a profound transformation in how systems are built and maintained. These practices are encompassed within the term "DevOps." This descriptive approach to an information system or application, alongside the configuration of its constituent components, has necessitated the development of descriptive languages paired with specialized engines for automating systems administration tasks. Among these, the tandem of Ansible (engine) and YAML (descriptive language) stands out as the most prevalent pair of tools in the market, facing notable competition mainly from Terraform. The present document investigates a solution for generating and managing Ansible YAML roles and playbooks, using generative large language models (LLMs) to translate human descriptions into code. Our efforts focus on identifying plausible directions and outlining potential industrial applications. Note: for the purposes of this experiment, we opted against using Ansible Lightspeed, owing to its reliance on an IBM Watson model for which we have found no publicly available references. Comprehensive information on this technology is available directly on our partner Red Hat's website: //www.redhat.com/en/about/press-releases/red-hat-introduces-ansible-lightspeed-ai-driven-it-automation
Benefiting from the development of deep learning, text-to-speech (TTS) techniques trained on clean speech have achieved significant performance improvements. However, data collected from real scenes often contains noise and generally must be denoised by speech enhancement models. Noise-robust TTS models are often trained on such enhanced speech and therefore suffer from the speech distortion and residual background noise that degrade the quality of the synthesized speech. Meanwhile, self-supervised pre-trained models have been shown to exhibit excellent noise robustness on many speech tasks, implying that their learned representations are more tolerant of noise perturbations. In this work, we therefore explore pre-trained models to improve the noise robustness of TTS models. Based on HiFi-GAN, we first propose a representation-to-waveform vocoder, which learns to map pre-trained model representations to the waveform. We then propose a text-to-representation FastSpeech2 model, which learns to map text to pre-trained model representations. Experimental results on the LJSpeech and LibriTTS datasets show that our method outperforms approaches based on speech enhancement in both subjective and objective metrics. Audio samples are available at: //zqs01.github.io/rep2wav.
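As a small illustration of the first ingredient (not the paper's exact model), self-supervised representations can be extracted with torchaudio's pretrained wav2vec 2.0 bundle and used, in place of mel-spectrograms, as the conditioning input of a HiFi-GAN-style vocoder; the dummy waveform below is a placeholder:

```python
import torch
import torchaudio

# Pretrained self-supervised model (wav2vec 2.0 base, 16 kHz input).
bundle = torchaudio.pipelines.WAV2VEC2_BASE
model = bundle.get_model().eval()

# Placeholder: one second of audio standing in for a (noisy) utterance.
waveform = torch.randn(1, bundle.sample_rate)

with torch.inference_mode():
    # List of per-layer features, each of shape (batch, frames, 768).
    features, _ = model.extract_features(waveform)

# A representation-to-waveform vocoder would consume such features
# (e.g., one chosen layer) instead of mel-spectrograms as its input.
print(len(features), features[-1].shape)
```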
A novel distributed algorithm is proposed for converging in finite time to a feasible consensus solution of the distributed robust convex optimization problem (DRCO) subject to bounded uncertainty, with global optimality to a certain accuracy, under a uniformly strongly connected network. First, a distributed lower-bounding procedure is developed, based on an outer iterative approximation of the DRCO obtained by discretizing the compact uncertainty set into a finite number of points. Second, a distributed upper-bounding procedure is proposed, based on iteratively approximating the DRCO by tightening the right-hand side of the constraints with a suitable positive parameter and enforcing them at finitely many points of the compact uncertainty set. The lower and upper bounds on the globally optimal objective of the DRCO are obtained from these two procedures. Third, two distributed termination methods are proposed to make all agents stop updating simultaneously by checking whether the gap between the upper and lower bounds falls below a certain accuracy. Fourth, it is proved that all agents converge in finite time to a feasible consensus solution that is globally optimal within the prescribed accuracy. Finally, a numerical case study illustrates the effectiveness of the distributed algorithm.
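The lower-bounding idea can be illustrated in a centralized, single-agent form (the distributed, consensus-based machinery of the paper is omitted): enforce the robust constraint only at a finite discretization of the uncertainty set, solve the relaxed problem, and add the worst-case uncertainty point of the current iterate. The toy problem below is our choice:

```python
import numpy as np
from scipy.optimize import linprog

# Toy semi-infinite problem (our choice): maximize x subject to
#   g(x, u) = u * x - 1 <= 0  for all u in the compact set U = [0.5, 2].
# Exact robust optimum: x* = 0.5 (the binding constraint is u = 2).
U_lo, U_hi = 0.5, 2.0
points = [U_lo]                       # finite discretization of U

for _ in range(10):
    # Lower-bounding relaxation: enforce the constraint only at `points`.
    res = linprog(c=[-1.0],
                  A_ub=np.array(points).reshape(-1, 1),
                  b_ub=np.ones(len(points)),
                  bounds=[(0.0, 10.0)])
    x = res.x[0]
    # Separation step: worst-case u for the current x (here analytically,
    # since u -> u * x - 1 is increasing in u for x >= 0).
    u_worst = U_hi
    if u_worst * x - 1.0 <= 1e-9:     # feasible on all of U: done
        break
    points.append(u_worst)

print("outer-approximation optimum:", x)   # converges to 0.5
```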
At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently, at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay when parameterized by $d$ and the maximum degree $\Delta$, i.e., an algorithm with delay $f(d,\Delta)\cdot n^{O(1)}$ for some computable function $f$. As a first step toward answering that question, they noted that the same question is open for the intimately related problem of listing all minimal dominating sets in graphs. In this paper, we answer the latter question in the affirmative.