Precoding techniques are key to dealing with multiuser interference in the downlink of cell-free (CF) multiple-input multiple-output systems. However, these techniques rely on accurate estimates of the channel state information at the transmitter (CSIT), which cannot be obtained in practical systems. As a result, precoders cannot handle interference as expected, and the residual interference substantially degrades the performance of the system. To address this problem, CF systems require precoders that are robust to CSIT imperfections. In this paper, we propose novel robust precoding techniques to mitigate the effects of residual multiuser interference. To this end, we include a loading term that minimizes the effects of the imperfect CSIT in the optimization objective. We further derive robust precoders that employ clusters of users and access points to reduce the computational cost and the signaling load. Numerical experiments show that the proposed robust minimum mean-square error (MMSE) precoding techniques outperform the conventional MMSE precoder for various accuracy levels of CSIT estimates.
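As a minimal sketch of the diagonal-loading idea behind such robust designs (the loading rule, variable names, and power normalization below are illustrative assumptions, not the precoder derived in the paper), a regularized MMSE precoder could be written as:
\begin{verbatim}
import numpy as np

def robust_mmse_precoder(H_hat, noise_var, err_var, power):
    """Regularized MMSE precoder with an extra loading term for imperfect CSIT.

    H_hat   : (K, M) estimated downlink channel (K users, M antennas).
    err_var : per-entry variance of the CSIT error; the term M * err_var
              inflates the regularization to hedge against residual
              multiuser interference (illustrative loading rule).
    """
    K, M = H_hat.shape
    loading = K * noise_var / power + M * err_var
    G = H_hat.conj().T @ np.linalg.inv(H_hat @ H_hat.conj().T
                                       + loading * np.eye(K))
    return np.sqrt(power) * G / np.linalg.norm(G, 'fro')  # total power constraint
\end{verbatim}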
We consider a dense small cell (DSC) network where multi-antenna small cell base stations (SBSs) transmit data to single-antenna users over a shared frequency band. To enhance capacity, a state-of-the-art technique known as noncoherent joint transmission (JT) is applied, enabling users to receive data from multiple coordinated SBSs. However, the sum rate maximization problem with noncoherent JT is inherently nonconvex and NP-hard. While existing optimization-based noncoherent JT algorithms can provide near-optimal performance, they require global channel state information (CSI) and multiple iterations, which makes them difficult to implement in DSC networks. To overcome these challenges, we first prove that the optimal beamforming structure is the same for both the power minimization problem and the sum rate maximization problem, and then mathematically derive this structure by solving the power minimization problem. The optimal beamforming structure effectively reduces the variable dimensions. By exploiting this structure, we propose a deep deterministic policy gradient-based distributed noncoherent JT scheme to maximize the system sum rate. In the proposed scheme, each SBS utilizes global information for training and uses local CSI to determine beamforming vectors. Simulation results demonstrate that the proposed scheme achieves comparable performance with considerably lower computational complexity and information overhead compared to centralized iterative optimization-based techniques, making it more attractive for practical deployment.
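To illustrate how an optimal beamforming structure shrinks the search space that a learning agent must explore, the sketch below rebuilds full beamforming vectors from low-dimensional per-user weights and powers; the particular parameterization is a classical weighted-regularization form and serves only as an assumption for illustration, not the structure derived in the paper:
\begin{verbatim}
import numpy as np

def beamformers_from_structure(H, lambdas, powers):
    """Rebuild beamforming vectors from a low-dimensional parameterization
    w_k ~ (I + sum_j lambda_j h_j h_j^H)^{-1} h_k, scaled to power p_k.

    H: (M, K) channel matrix (M antennas, K users); lambdas, powers: length K.
    An agent then only outputs (lambdas, powers) instead of M*K complex
    entries.  Illustrative structure, not the paper's derivation.
    """
    M, K = H.shape
    A = np.eye(M, dtype=complex)
    for j in range(K):
        A += lambdas[j] * np.outer(H[:, j], H[:, j].conj())
    A_inv = np.linalg.inv(A)
    W = np.zeros((M, K), dtype=complex)
    for k in range(K):
        d = A_inv @ H[:, k]
        W[:, k] = np.sqrt(powers[k]) * d / np.linalg.norm(d)
    return W
\end{verbatim}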
Mixed truck-drone delivery systems have attracted increasing attention for last-mile logistics, but real-world complexities demand a shift from single-agent, fully connected graph models to multi-agent systems operating on actual road networks. We introduce the multi-agent flying sidekick traveling salesman problem (MA-FSTSP) on road networks, extending the single truck-drone model to multiple trucks, each carrying multiple drones, while considering full road networks for truck restrictions and flexible drone routes. We propose a mixed-integer linear programming model and an efficient three-phase heuristic algorithm for this NP-hard problem. Our approach decomposes the MA-FSTSP into manageable subproblems of one truck with multiple drones. It then computes drone-free truck routes within each subproblem, which serve in the final phase as heuristics to help optimize drone and truck routes simultaneously. Extensive numerical experiments on Manhattan and Boston road networks demonstrate our algorithm's superior effectiveness and efficiency, significantly outperforming both column generation and variable neighborhood search baselines in solution quality and computation time. Notably, our approach scales to more than 300 customers within a 5-minute time limit, showcasing its potential for large-scale, real-world logistics applications.
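The decomposition idea can be sketched as follows; the angular-sweep clustering and nearest-neighbour routing are simple stand-ins for the first two phases (the actual heuristic, including the final drone-insertion phase and road-network constraints, is considerably more elaborate):
\begin{verbatim}
import numpy as np

def decompose_and_route(depot, customers, n_trucks):
    """Phase-1/2 sketch: split customers among trucks by an angular sweep
    around the depot, then build a drone-free truck route per cluster with
    a nearest-neighbour heuristic.  Drone sorties would be inserted in the
    final phase.  Hypothetical helper for illustration only.
    """
    depot = np.asarray(depot, dtype=float)
    customers = np.asarray(customers, dtype=float)
    angles = np.arctan2(customers[:, 1] - depot[1], customers[:, 0] - depot[0])
    clusters = np.array_split(np.argsort(angles), n_trucks)
    routes = []
    for idx in clusters:
        remaining, route, pos = list(idx), [], depot
        while remaining:
            nxt = min(remaining, key=lambda i: np.linalg.norm(customers[i] - pos))
            route.append(int(nxt))
            pos = customers[nxt]
            remaining.remove(nxt)
        routes.append(route)
    return routes
\end{verbatim}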
Broadcast and consensus are among the most fundamental tasks in distributed computing. These tasks are particularly challenging in dynamic networks, where communication across the network links may be unreliable, e.g., due to mobility or failures. Indeed, over the past years, researchers have derived several impossibility results and high time complexity lower bounds (i.e., linear in the number of nodes $n$) for these tasks, even for oblivious message adversaries where the communication networks are rooted trees. However, such deterministic adversarial models may be overly conservative, as many processes in real-world settings are stochastic in nature rather than worst case. This paper initiates the study of broadcast and consensus on stochastic dynamic networks, introducing a randomized oblivious message adversary. Our model is reminiscent of the SI model in epidemics, but it revolves around trees, which renders the analysis harder due to the apparent lack of independence. In particular, we show that if information dissemination occurs along random rooted trees, broadcast and consensus complete fast with high probability, namely in logarithmic time. Our analysis proves the independence of a key variable, which enables a formal understanding of the dissemination process. More formally, for a network with $n$ nodes, we first consider the completely random case where in each round the communication network is chosen uniformly at random among rooted trees. We then introduce the notion of a randomized oblivious message adversary, where in each round an adversary can choose $k$ edges to appear in the communication network, and a rooted tree is then chosen uniformly at random among the set of all rooted trees that include these edges. We show that broadcast completes in $O(k+\log n)$ rounds, and that the same holds for consensus as long as $k \le 0.1n$.
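A small simulation of the completely random case ($k=0$) illustrates the logarithmic behaviour; the uniform tree sampling via Pr\"ufer sequences and the one-hop, parent-to-child dissemination rule below are modelling assumptions made for this sketch:
\begin{verbatim}
import heapq, random

def random_rooted_tree(n):
    """Uniform random labelled tree (Prufer decoding), rooted at a random
    node.  Returns (adjacency list, root).  Requires n >= 2."""
    prufer = [random.randrange(n) for _ in range(n - 2)]
    degree = [1] * n
    for v in prufer:
        degree[v] += 1
    adj = {v: [] for v in range(n)}
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    for v in prufer:
        leaf = heapq.heappop(leaves)
        adj[leaf].append(v); adj[v].append(leaf)
        degree[leaf] -= 1; degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    u, w = [v for v in range(n) if degree[v] == 1]
    adj[u].append(w); adj[w].append(u)
    return adj, random.randrange(n)

def broadcast_rounds(n, source=0):
    """Rounds until the source's message reaches everyone when, each round,
    a fresh uniformly random rooted tree is drawn and a node learns the
    message only if its parent already knew it before the round."""
    informed, rounds = {source}, 0
    while len(informed) < n:
        adj, root = random_rooted_tree(n)
        parent, order = {root: None}, [root]
        for u in order:                      # BFS to orient edges away from root
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    order.append(v)
        informed |= {v for v in order if parent[v] in informed}
        rounds += 1
    return rounds
\end{verbatim}
According to the result above, the average of \texttt{broadcast\_rounds(n)} over many trials should grow logarithmically in $n$, i.e., the $k=0$ case of the $O(k+\log n)$ bound.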
An intrinsically causal approach to lifting factorization, called the Causal Complementation Algorithm, is developed for arbitrary two-channel perfect reconstruction FIR filter banks. This addresses an engineering shortcoming of the inherently noncausal strategy of Daubechies and Sweldens for factoring discrete wavelet transforms, which was based on the Extended Euclidean Algorithm for Laurent polynomials. The Causal Complementation Algorithm reproduces all lifting factorizations created by the causal version of the Euclidean Algorithm approach and generates additional causal factorizations, not obtainable via the causal Euclidean Algorithm, whose degree-reducing properties generalize those furnished by the Euclidean Algorithm. In lieu of the Euclidean Algorithm, the new approach employs Gaussian elimination in matrix polynomials using a slight generalization of polynomial long division. It is shown that certain polynomial degree-reducing conditions are both necessary and sufficient for a causal elementary matrix decomposition to be obtainable using the Causal Complementation Algorithm, yielding a formal definition of ``lifting factorization'' that was missing from the work of Daubechies and Sweldens.
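For concreteness, each elementary step relies on polynomial division in the delay variable $z^{-1}$; the sketch below performs ordinary long division over $z^{-1}$ (the slight generalization used by the Causal Complementation Algorithm, which trades off degree reduction differently, is not reproduced here):
\begin{verbatim}
import numpy as np

def causal_division(a, b):
    """Long division of causal FIR polynomials in z^{-1}:
    a(z) = q(z) * b(z) + r(z) with deg(r) < deg(b) (degrees in z^{-1}).
    Coefficients are listed as [a0, a1, ...] for a0 + a1*z^{-1} + ...
    Illustration of the division underlying an elementary lifting step."""
    q, r = np.polydiv(np.asarray(a)[::-1], np.asarray(b)[::-1])
    return q[::-1], r[::-1]

# Example: (1 + 3 z^{-1} + 2 z^{-2}) / (1 + z^{-1})
q, r = causal_division([1, 3, 2], [1, 1])   # q = 1 + 2 z^{-1}, r = 0
\end{verbatim}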
Plasma instabilities are a major concern in plasma science, for applications ranging from particle accelerators to nuclear fusion reactors. In this work, we consider the possibility of controlling such instabilities by adding an external electric field to the Vlasov--Poisson equations. Our approach to determining the external electric field is based on conducting a linear analysis of the resulting equations. We show that it is possible to select external electric fields that completely suppress the plasma instabilities present in the system when the equilibrium distribution and the perturbation are known. In fact, the proposed strategy returns the plasma to its equilibrium with a rate that is faster than exponential in time. We further perform numerical simulations of the nonlinear two-stream and bump-on-tail instabilities to verify our theory and to compare the different strategies that we propose in this work.
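For reference, a commonly studied one-dimensional form of the controlled system reads (our notation; $H$ denotes the external control field)
\[
\partial_t f + v\,\partial_x f + \bigl(E[f](t,x) + H(t,x)\bigr)\,\partial_v f = 0,
\qquad
\partial_x E[f](t,x) = \int_{\mathbb{R}} f(t,x,v)\,\mathrm{d}v - 1,
\]
where $H$ is designed from a linear analysis of the perturbation around the equilibrium distribution so that the unstable modes are suppressed.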
As a crucial and intricate task in robotic minimally invasive surgery, reconstructing surgical scenes from stereo or monocular endoscopic video holds immense potential for clinical applications. NeRF-based techniques have recently garnered attention for their ability to reconstruct scenes implicitly. In contrast, 3D Gaussian splatting (3D-GS) represents scenes explicitly using 3D Gaussians and projects them onto the 2D plane, replacing the complex volume rendering in NeRF. However, these methods face challenges in surgical scene reconstruction, such as slow inference, dynamic scenes, and surgical tool occlusion. This work explores and reviews state-of-the-art (SOTA) approaches, discussing their innovations and implementation principles. Furthermore, we replicate the models and conduct testing and evaluation on two datasets. The test results demonstrate that, with advancements in these techniques, achieving real-time, high-quality reconstructions becomes feasible.
We present a space-time multigrid method based on tensor-product space-time finite element discretizations. The method is facilitated by the matrix-free capabilities of the {\ttfamily deal.II} library. It addresses both high-order continuous and discontinuous variational time discretizations combined with spatial finite element discretizations. The effectiveness of multigrid methods in large-scale stationary problems is well established. However, their application in the space-time context poses significant challenges, mainly due to the construction of suitable smoothers. To address these challenges, we develop a space-time cell-wise additive Schwarz smoother and demonstrate its effectiveness on the heat and acoustic wave equations. The matrix-free framework of the {\ttfamily deal.II} library supports various multigrid strategies, including $h$-, $p$-, and $hp$-refinement across spatial and temporal dimensions. Extensive empirical evidence, provided through scaling and convergence tests on high-performance computing platforms, demonstrates high performance on perturbed meshes and problems with heterogeneous and discontinuous coefficients. Throughputs of over a billion degrees of freedom per second are achieved on problems with more than a trillion global degrees of freedom. The results show that the space-time multigrid method can effectively solve complex problems in high-fidelity simulations and has great potential for use in coupled problems.
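The cell-wise additive Schwarz idea can be summarized by the following matrix-based sketch (the actual implementation is matrix-free and uses precomputed local solvers; the assembled matrix, the damping parameter, and the block layout here are illustrative assumptions):
\begin{verbatim}
import numpy as np

def additive_schwarz_smoother(A, b, x, blocks, omega=0.5):
    """One cell-wise additive Schwarz sweep for A x = b.

    blocks : list of index arrays, one per space-time cell, collecting the
             degrees of freedom owned by that cell.  Each local block of A
             is solved on the current residual and the damped corrections
             are summed (additive, hence embarrassingly parallel).
    """
    r = b - A @ x                              # global residual
    correction = np.zeros_like(x)
    for dofs in blocks:
        A_loc = A[np.ix_(dofs, dofs)]          # local cell matrix
        correction[dofs] += np.linalg.solve(A_loc, r[dofs])
    return x + omega * correction
\end{verbatim}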
We propose a novel method for user-to-user interference (UUI) mitigation in dynamic time-division duplex multiple-input multiple-output communication systems with multi-antenna users. Specifically, we consider the downlink data transmission in the presence of UUI caused by a user that simultaneously transmits in uplink. Our method introduces an overhead for estimation of the user-to-user channels by transmitting pilots from the uplink user to the downlink users. Each downlink user obtains a channel estimate that is used to design a combining matrix for UUI mitigation. We analytically derive an achievable spectral efficiency for the downlink transmission in the presence of UUI with our mitigation technique. Through numerical simulations, we show that our method can significantly improve the spectral efficiency performance in cases of heavy UUI.
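As an illustration of the combining step, an MMSE-type receive filter that treats the estimated user-to-user channel as colored interference could look as follows (a sketch of the general idea under simplified assumptions, not the exact combiner derived in the paper):
\begin{verbatim}
import numpy as np

def uui_mmse_combiner(H_dl, G_uu_hat, noise_var, ul_power):
    """MMSE-style combiner at a multi-antenna downlink user.

    H_dl     : (N, S) effective downlink channel (N receive antennas, S streams).
    G_uu_hat : (N, Nu) estimated channel from the uplink user causing UUI.
    The estimated UUI contribution enters the interference-plus-noise
    covariance, so the combiner steers nulls toward the uplink user.
    """
    N = H_dl.shape[0]
    R = ul_power * (G_uu_hat @ G_uu_hat.conj().T) + noise_var * np.eye(N)
    V = np.linalg.inv(H_dl @ H_dl.conj().T + R) @ H_dl
    return V        # column s is the combining vector for stream s
\end{verbatim}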
The existence of representative datasets is a prerequisite for many successful artificial intelligence and machine learning models. However, the subsequent application of these models often involves scenarios that are inadequately represented in the data used for training. The reasons for this are manifold and range from time and cost constraints to ethical considerations. As a consequence, the reliable use of these models, especially in safety-critical applications, is a major challenge. Leveraging additional, already existing sources of knowledge is key to overcoming the limitations of purely data-driven approaches, and eventually to increasing the generalization capability of these models. Furthermore, predictions that conform with knowledge are crucial for making trustworthy and safe decisions even in underrepresented scenarios. This work provides an overview of existing techniques and methods in the literature that combine data-based models with existing knowledge. The identified approaches are structured according to the categories of integration, extraction, and conformity. Special attention is given to applications in the field of autonomous driving.
We introduce a multi-task setup for identifying and classifying entities, relations, and coreference clusters in scientific articles. We create SciERC, a dataset that includes annotations for all three tasks, and develop a unified framework called Scientific Information Extractor (SciIE) with shared span representations. The multi-task setup reduces cascading errors between tasks and leverages cross-sentence relations through coreference links. Experiments show that our multi-task model outperforms previous models in scientific information extraction without using any domain-specific features. We further show that the framework supports construction of a scientific knowledge graph, which we use to analyze information in scientific literature.
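The shared-span idea can be illustrated with the following simplified sketch, in which every candidate span receives a single representation that the entity, relation, and coreference scorers would all reuse (the endpoint-plus-width features below are a simplification for illustration, not the actual SciIE architecture):
\begin{verbatim}
import numpy as np

def shared_span_representations(token_vecs, max_width=8):
    """Enumerate candidate spans up to max_width tokens and build one shared
    representation per span from its endpoint vectors plus a width feature.
    Downstream entity, relation and coreference heads would score these
    shared representations rather than task-specific ones.
    """
    n = token_vecs.shape[0]
    spans, reps = [], []
    for i in range(n):
        for j in range(i, min(n, i + max_width)):
            width = np.array([(j - i + 1) / max_width])
            reps.append(np.concatenate([token_vecs[i], token_vecs[j], width]))
            spans.append((i, j))
    return spans, np.stack(reps)
\end{verbatim}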