In the downlink, a base station (BS) with multiple transmit antennas applies zero-forcing beamforming to transmit to single-antenna mobile users in a cell. We propose schemes that optimize the transmit power and the number of bits for channel direction information (CDI) for all users to achieve max-min signal-to-interference-plus-noise ratio (SINR) fairness. The optimal allocation can be obtained by a geometric program for both non-orthogonal multiple access (NOMA) and orthogonal multiple access (OMA). For NOMA, two users with highly correlated channels are paired and share the same transmit beamforming. In some small total-CDI-rate regimes, we show that NOMA can outperform OMA by as much as 3 dB. The performance gain over OMA increases when the correlation-coefficient threshold for user pairing is set higher. To reduce computational complexity, we propose to allocate transmit power and CDI rate to groups of multiple users instead of to individual users. The user-grouping scheme is based on K-means clustering over the user SINRs. We also propose a progressive filling scheme that performs close to the optimum but reduces the computation time by almost three orders of magnitude in some numerical examples.
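To make the grouping step concrete, here is a minimal sketch of K-means clustering over scalar user SINRs, assuming per-user SINR estimates in dB; the data, the number of groups, and all names are illustrative, and the subsequent per-group power/CDI-rate allocation via the geometric program is not shown.

```python
import numpy as np

def kmeans_1d(values, k, iters=100, seed=0):
    """Plain 1-D K-means used to group users by SINR before per-group allocation."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each user to the nearest group center.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([values[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

# Illustrative SINR estimates (dB) for 12 users, grouped into 3 classes.
sinr_db = np.array([2.1, 3.0, 2.8, 9.5, 10.2, 9.9, 17.0, 16.5, 3.3, 10.8, 16.9, 17.4])
labels, centers = kmeans_1d(sinr_db, k=3)
print(labels, centers)
```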
In this paper we prove convergence rates for time discretisation schemes for semi-linear stochastic evolution equations with additive or multiplicative Gaussian noise, where the leading operator $A$ is the generator of a strongly continuous semigroup $S$ on a Hilbert space $X$, and the focus is on non-parabolic problems. The main results are optimal bounds for the uniform strong error $$\mathrm{E}_{k}^{\infty} := \Big(\mathbb{E} \sup_{j\in \{0, \ldots, N_k\}} \|U(t_j) - U^j\|^p\Big)^{1/p},$$ where $p \in [2,\infty)$, $U$ is the mild solution, $U^j$ is obtained from a time discretisation scheme, $k$ is the step size, and $N_k = T/k$. The usual schemes, such as splitting/exponential Euler, implicit Euler, and Crank-Nicolson, are included as special cases. Under conditions on the nonlinearity and the noise we show:
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (linear equation, additive noise, general $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim \sqrt{k} \log(T/k)$ (nonlinear equation, multiplicative noise, contractive $S$);
- $\mathrm{E}_{k}^{\infty}\lesssim k \log(T/k)$ (nonlinear wave equation, multiplicative noise).
The logarithmic factor can be removed if the splitting scheme is used with a (quasi)-contractive $S$. The obtained bounds coincide with the optimal bounds for SDEs. Most of the existing literature is concerned with bounds for the simpler pointwise strong error $$\mathrm{E}_k:=\bigg(\sup_{j\in \{0,\ldots,N_k\}}\mathbb{E} \|U(t_j) - U^{j}\|^p\bigg)^{1/p}.$$ Applications to Maxwell equations, Schr\"odinger equations, and wave equations are included. For these equations our results improve and reprove several existing results with a unified method.
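As a concrete instance of a scheme covered by these results, the following sketch implements the splitting/exponential Euler step $U^{j+1} = e^{kA}\big(U^j + k f(U^j) + \Delta W_j\big)$ for a toy two-dimensional system in which $A$ is skew-symmetric, so that $e^{tA}$ is a contractive (unitary) group as in the non-parabolic setting; the nonlinearity, noise, and dimensions are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Skew-symmetric A: e^{tA} is a rotation (unitary) group, mimicking the
# non-smoothing, non-parabolic semigroups (wave/Schroedinger type).
def expA(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def f(u):
    return np.tanh(u)  # illustrative globally Lipschitz nonlinearity

def splitting_exponential_euler(u0, k, N, dW):
    """One trajectory of U^{j+1} = e^{kA}(U^j + k f(U^j) + dW_j)."""
    U = np.empty((N + 1, 2))
    U[0] = u0
    E = expA(k)
    for j in range(N):
        U[j + 1] = E @ (U[j] + k * f(U[j]) + dW[j])
    return U

T, N = 1.0, 1000
k = T / N
rng = np.random.default_rng(0)
dW = np.sqrt(k) * rng.standard_normal((N, 2))   # additive Gaussian increments
U = splitting_exponential_euler(np.array([1.0, 0.0]), k, N, dW)
print(U[-1])
```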
In the 1950s, Horace Barlow and Fred Attneave suggested a connection between sensory systems and the environment they are adapted to: early vision evolved to maximise the information it conveys about incoming signals. Following Shannon's definition, this information was described using the probability of images taken from natural scenes. Until now, direct and accurate predictions of image probabilities have not been possible due to computational limitations, so the idea could only be explored indirectly, mainly through oversimplified models of the image density or through system design methods; even so, these approaches succeeded in reproducing a wide range of physiological and psychophysical phenomena. In this paper, we directly evaluate the probability of natural images and analyse how it may determine perceptual sensitivity. We employ image quality metrics that correlate well with human opinion as a surrogate of human vision, and an advanced generative model to directly estimate the probability. Specifically, we analyse how the sensitivity of full-reference image quality metrics can be predicted from quantities derived directly from the probability distribution of natural images. First, we compute the mutual information between a wide range of probability surrogates and the sensitivity of the metrics, and find that the most influential factor is the probability of the noisy image. Then we explore how these probability surrogates can be combined in a simple model to predict metric sensitivity, obtaining an upper bound of 0.85 for the correlation between the model predictions and the actual perceptual sensitivity. Finally, we explore how to combine the probability surrogates using simple analytical expressions, and obtain two functional forms (using one or two surrogates) that can be used to predict the sensitivity of the human visual system given a particular pair of images.
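The core measurement can be sketched with stand-ins: a toy Gaussian density plays the role of the deep generative model, a divisively normalized difference plays the role of a perceptual full-reference metric, and we report the correlation between the log-probability of the noisy image and the finite-difference sensitivity of the metric. All distributions and functional forms here are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_imgs, eps = 16, 500, 0.05

# Toy Gaussian "natural image" density standing in for a deep generative model.
cov = np.diag(np.linspace(1.0, 0.1, dim))
cov_inv = np.linalg.inv(cov)
log_prob = lambda x: -0.5 * x @ cov_inv @ x

def metric(x, y):
    # Toy divisively normalized difference standing in for a perceptual metric.
    fx, fy = x / (1 + np.abs(x)), y / (1 + np.abs(y))
    return np.sqrt(np.mean((fx - fy) ** 2))

sensitivity, surrogate = [], []
for _ in range(n_imgs):
    x = rng.multivariate_normal(np.zeros(dim), cov)   # "natural image"
    n = rng.standard_normal(dim)
    n *= eps / np.linalg.norm(n)
    y = x + n                                         # noisy image
    sensitivity.append(metric(x, y) / eps)            # finite-difference sensitivity
    surrogate.append(log_prob(y))                     # probability surrogate: log p(y)

print("correlation:", np.corrcoef(surrogate, sensitivity)[0, 1])
```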
In this paper, we leverage polar codes and the well-established channel polarization to design capacity-achieving codes with a certain constraint on the weights of all the columns in the generator matrix (GM), while retaining a low-complexity decoding algorithm. We first show that, given a binary-input memoryless symmetric (BMS) channel $W$ and a constant $s \in (0, 1]$, there exists a polarization kernel such that the corresponding polar code is capacity-achieving with \textit{rate of polarization} $s/2$ and GM column weights bounded from above by $N^s$. To improve the sparsity-versus-error-rate trade-off, we devise a column-splitting algorithm and two coding schemes, first for the binary erasure channel (BEC) and then for general BMS channels. The \textit{polar-based} codes generated by the two schemes inherit several fundamental properties of polar codes with the original $2 \times 2$ kernel, including the decay in error probability, the decoding complexity, and the capacity-achieving property. Furthermore, they have the additional property that their GM column weights are bounded from above sublinearly in $N$, whereas the original polar codes have some column weights that are linear in $N$. In particular, for any BEC and any $\beta < 0.5$, we show the existence of a sequence of capacity-achieving polar-based codes in which all the GM column weights are bounded from above by $N^\lambda$ with $\lambda \approx 0.585$, and whose error probability is bounded by $O(2^{-N^{\beta}})$ under a decoder with complexity $O(N\log N)$. The existence of similar capacity-achieving polar-based codes with the same decoding complexity is shown for any BMS channel and any $\beta < 0.5$, with $\lambda \approx 0.631$.
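The linear-weight columns of the original kernel are easy to verify numerically: the sketch below builds $F^{\otimes n}$ for the kernel $F = \left[\begin{smallmatrix}1&0\\1&1\end{smallmatrix}\right]$ and inspects its column weights (bit-reversal permutes the columns but not the weight multiset). The threshold $N^{0.585}$ from the abstract is used only to count how many columns a sparsity constraint of that form would rule out; this is a numerical illustration, not the column-splitting scheme itself.

```python
import numpy as np

F = np.array([[1, 0], [1, 1]], dtype=np.int64)

def column_weights(n):
    """Column weights of F^{(kron n)}, the square matrix behind the polar GM."""
    G = np.array([[1]], dtype=np.int64)
    for _ in range(n):
        G = np.kron(G, F)
    return G.sum(axis=0)

n = 8                       # N = 256
N = 2 ** n
w = column_weights(n)
print("max column weight:", w.max(), "= N =", N)     # all-ones column -> linear in N
print("fraction of columns with weight > N**0.585:", np.mean(w > N ** 0.585))
```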
In this paper, we investigate downlink power adaptation for the suborbital node in suborbital-ground communication systems under extremely high-reliability and ultra-low-latency communication requirements, which we formulate as a power-threshold minimization problem. Specifically, the interference from satellites is modeled as an accumulation of stochastic point processes on different orbital planes, and hybrid beamforming (HBF) is considered. Moreover, to ensure the quality-of-service (QoS) constraints, the finite-blocklength regime is adopted. Numerical results show that the transmit power required by the suborbital node decreases as the elevation angle at the receiving station increases.
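The finite-blocklength ingredient can be sketched with the standard normal approximation for the AWGN channel, $R \approx C(\gamma) - \sqrt{V(\gamma)/n}\,Q^{-1}(\epsilon)$, combined with a bisection for the smallest SNR that meets a target rate; this toy omits HBF and the satellite-interference point processes, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def fbl_rate(snr, n, eps):
    """Normal approximation (Polyanskiy et al.): R ~ C - sqrt(V/n) * Q^{-1}(eps)."""
    C = np.log2(1.0 + snr)
    V = (1.0 - 1.0 / (1.0 + snr) ** 2) * np.log2(np.e) ** 2
    return C - np.sqrt(V / n) * norm.isf(eps)

def min_snr(rate, n, eps, lo=1e-6, hi=1e6):
    """Bisection (in log scale) for the smallest SNR whose FBL rate meets the target."""
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        if fbl_rate(mid, n, eps) < rate:
            lo = mid
        else:
            hi = mid
    return hi

# Illustrative QoS target: 1 bit/channel use, blocklength 256, error 1e-7.
print("min required SNR (dB):", 10 * np.log10(min_snr(rate=1.0, n=256, eps=1e-7)))
```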
The matrix sensing problem is an important low-rank optimization problem that has found a wide range of applications, such as matrix completion, phase synchronization/retrieval, robust PCA, and power system state estimation. In this work, we focus on the general matrix sensing problem with linear measurements that are corrupted by random noise. We investigate the scenario where the search rank $r$ is equal to the true rank $r^*$ of the unknown ground truth (the exactly parametrized case), as well as the scenario where $r$ is greater than $r^*$ (the overparametrized case). We quantify the role of the restricted isometry property (RIP) in shaping the landscape of the non-convex factorized formulation and in assisting the success of local search algorithms. First, we develop a global guarantee on the maximum distance between an arbitrary local minimizer of the non-convex problem and the ground truth, under the assumption that the RIP constant is smaller than $1/(1+\sqrt{r^*/r})$. We then present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either considerably close to the ground truth or far away from it. More importantly, we prove that this noisy, overparametrized problem exhibits the strict saddle property, which leads to the global convergence of the perturbed gradient descent algorithm in polynomial time. The results of this work provide a comprehensive understanding of the geometric landscape of the matrix sensing problem in the noisy and overparametrized regime.
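A minimal sketch of the noisy, overparametrized setting: Gaussian linear measurements of a rank-$r^*$ ground truth are fit with a rank-$r > r^*$ factorized model by plain gradient descent (the paper analyzes a perturbed variant); the dimensions, noise level, and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r_true, r, m = 20, 2, 3, 400      # overparametrized: search rank r > true rank

# Rank-r_true ground truth and symmetric Gaussian sensing matrices.
Z = rng.standard_normal((n, r_true))
M_star = Z @ Z.T
A = rng.standard_normal((m, n, n)) / np.sqrt(m)
A = (A + A.transpose(0, 2, 1)) / 2
y = np.einsum('kij,ij->k', A, M_star) + 0.01 * rng.standard_normal(m)

def grad(X):
    """Gradient of ||A(X X^T) - y||^2 in the factorized variable X."""
    R = np.einsum('kij,ij->k', A, X @ X.T) - y
    return 4 * np.einsum('k,kij->ij', R, A) @ X

X = 0.1 * rng.standard_normal((n, r))  # small random initialization
for _ in range(5000):
    X -= 0.001 * grad(X)

print("relative error:", np.linalg.norm(X @ X.T - M_star) / np.linalg.norm(M_star))
```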
In this paper we study multi-robot path planning for persistent monitoring tasks. We consider the case where robots have a limited battery capacity with a discharge time $D$. We represent the areas to be monitored as the vertices of a weighted graph. For each vertex, there is a constraint on the maximum allowable time between robot visits, called the latency. The objective is to find the minimum number of robots that can satisfy these latency constraints while also ensuring that the robots periodically charge at a recharging depot. The decision version of this problem is known to be PSPACE-complete. We present an $O(\frac{\log D}{\log \log D}\log \rho)$ approximation algorithm for the problem, where $\rho$ is the ratio of the maximum to the minimum latency constraint. We also present an orienteering-based heuristic to solve the problem and show empirically that it typically provides higher-quality solutions than the approximation algorithm. We extend our results to provide an algorithm for the problem of minimizing the maximum weighted latency given a fixed number of robots. We evaluate our algorithms on large problem instances in a patrolling scenario and in a wildfire monitoring application. We also compare the algorithms with an existing solver on benchmark instances.
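The latency constraint itself is simple to operationalize: for a candidate periodic walk, the maximum gap between consecutive visits to each vertex (including the wrap-around gap) must not exceed that vertex's latency. The sketch below implements only this feasibility check for a single robot's cyclic walk; the graph, travel times, and latencies are illustrative, and the approximation algorithm and orienteering heuristic are not shown.

```python
def max_visit_gaps(walk, travel_time):
    """Max time between consecutive visits to each vertex over one period of a
    cyclic walk; the walk is feasible iff gap[v] <= latency[v] for all v."""
    times, t = [], 0.0
    for u, v in zip(walk, walk[1:] + walk[:1]):
        times.append(t)
        t += travel_time[(u, v)]
    period = t
    gaps = {}
    for v in set(walk):
        visits = [tt for vv, tt in zip(walk, times) if vv == v]
        diffs = [b - a for a, b in zip(visits, visits[1:])]
        diffs.append(period - visits[-1] + visits[0])  # wrap-around gap
        gaps[v] = max(diffs)
    return gaps

# Illustrative cycle through vertices a, b, c with a recharging depot d.
walk = ['d', 'a', 'b', 'a', 'c']
travel_time = {('d','a'): 2, ('a','b'): 3, ('b','a'): 3, ('a','c'): 4, ('c','d'): 4}
latency = {'a': 10, 'b': 16, 'c': 16, 'd': 16}
gaps = max_visit_gaps(walk, travel_time)
print(gaps, all(gaps[v] <= latency[v] for v in gaps))
```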
Intelligent reflecting surface (IRS) is envisioned to become a key technology for the upcoming sixth-generation (6G) wireless system due to its potential to deliver high performance in a power-efficient and cost-efficient way. Given its disruptive capability and hardware constraints, the integration of IRS imposes some fundamental particularities on the coordination of multi-user signal transmission. Consequently, the conventional orthogonal and non-orthogonal multiple-access schemes are hard to apply directly, because active beamforming at the base station and passive reflection at the IRS must be jointly optimized. In this paper, relying on an alternating optimization method, we develop novel schemes for efficient multiple access in IRS-aided multi-user multi-antenna systems. Achievable performance in terms of the sum spectral efficiency is theoretically analyzed. A comprehensive comparison of different schemes and configurations is conducted through Monte Carlo simulations to clarify which scheme is favorable for this emerging 6G paradigm.
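For the single-user case, one round of alternating optimization has a simple closed form: fixing the reflection, the best transmit beam is maximum-ratio transmission (MRT) over the effective channel; fixing the beam, each unit-modulus reflection coefficient is rotated to phase-align its cascaded path with the direct path. The sketch below iterates these two steps for randomly drawn channels; the channel model and dimensions are illustrative assumptions, and the paper's multi-user schemes require more involved updates.

```python
import numpy as np

rng = np.random.default_rng(0)
M, Nr = 4, 32   # BS antennas, IRS elements

def cplx(*shape):
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

hd, G, hr = cplx(M), cplx(Nr, M), cplx(Nr)   # direct, BS->IRS, IRS->user channels

theta = np.ones(Nr, dtype=complex)           # unit-modulus reflection coefficients
for _ in range(10):
    # Step 1: fix theta, apply MRT to the effective channel h_d + G^H diag(theta)^H h_r.
    h_eff = hd + G.conj().T @ (np.conj(theta) * hr)
    w = h_eff / np.linalg.norm(h_eff)
    # Step 2: fix w, align each cascaded term's phase with the direct path (closed form).
    a0 = np.vdot(hd, w)                      # h_d^H w
    v = np.conj(hr) * (G @ w)                # per-element cascaded terms
    theta = np.exp(1j * (np.angle(a0) - np.angle(v)))
    print("effective channel gain:", abs(a0 + theta @ v))   # monotone non-decreasing
```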
When solving compressible multi-material flow problems, an unresolved challenge is the computation of advective fluxes across material interfaces that separate drastically different thermodynamic states and relations. A popular idea in this regard is to locally construct bimaterial Riemann problems and to apply their exact solutions in flux computation. For general equations of state, however, finding the exact solution of a Riemann problem is expensive, as it requires nested loops. Multiplied by the large number of Riemann problems constructed during a simulation, the computational cost often becomes prohibitive. The work presented in this paper aims to accelerate the solution of bimaterial Riemann problems without introducing approximations or offline precomputation tasks. The basic idea is to exploit some special properties of the Riemann problem equations and to recycle previous solutions as much as possible. Following this idea, four acceleration methods are developed: (1) a change of integration variable through rarefaction fans, (2) storing and reusing integration trajectory data, (3) step size adaptation, and (4) constructing an R-tree on the fly to generate initial guesses. The performance of these acceleration methods is assessed using four example problems in underwater explosion, laser-induced cavitation, and hypervelocity impact. These problems exhibit strong shock waves, large interface deformations, contact of multiple (>2) interfaces, and interaction between gases and condensed matter. In these challenging cases, the solution of bimaterial Riemann problems is accelerated by 37 to 83 times. As a result, the total cost of advective flux computation, which includes the exact Riemann problem solution at material interfaces and the numerical flux calculation over the entire computational domain, is accelerated by 18 to 79 times.
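To illustrate the recycling idea in isolation, the sketch below solves a sequence of bimaterial Riemann problems for two ideal gases (the paper targets general equations of state) with a Newton iteration on the interface pressure, warm-started from the nearest previously solved state rather than a fixed default; a brute-force nearest-neighbor list stands in for the paper's R-tree, and all states are illustrative assumptions.

```python
import numpy as np

def f_side(p, rho, pk, g):
    """Toro's pressure function for one side of the Riemann problem (ideal gas)."""
    if p > pk:                                   # shock branch
        A, B = 2.0 / ((g + 1) * rho), (g - 1) / (g + 1) * pk
        return (p - pk) * np.sqrt(A / (p + B))
    a = np.sqrt(g * pk / rho)                    # rarefaction branch
    return 2 * a / (g - 1) * ((p / pk) ** ((g - 1) / (2 * g)) - 1)

def solve_pstar(WL, WR, p0):
    """Newton iteration (numerical derivative) for the interface pressure p*."""
    (rL, uL, pL, gL), (rR, uR, pR, gR) = WL, WR
    p = max(p0, 1e-12)
    for it in range(1, 101):
        phi = f_side(p, rL, pL, gL) + f_side(p, rR, pR, gR) + (uR - uL)
        h = 1e-7 * p
        dphi = (f_side(p + h, rL, pL, gL) + f_side(p + h, rR, pR, gR)
                + (uR - uL) - phi) / h
        p_new = max(p - phi / dphi, 1e-12)
        if abs(p_new - p) < 1e-10 * p:
            return p_new, it
        p = p_new
    return p, it

cache = []                                       # (state vector, p*); paper: an R-tree
def warm_start(WL, WR, default=1.0):
    if not cache:
        return default
    q = np.array(WL[:3] + WR[:3])
    i = int(np.argmin([np.linalg.norm(q - s) for s, _ in cache]))
    return cache[i][1]

rng = np.random.default_rng(0)
iters = []
for _ in range(200):                             # slowly drifting interface states
    WL = (1.0 + 0.05 * rng.standard_normal(), 0.0, 1.0 + 0.05 * rng.standard_normal(), 1.4)
    WR = (0.9 + 0.05 * rng.standard_normal(), 0.0, 0.8 + 0.05 * rng.standard_normal(), 1.67)
    pstar, it = solve_pstar(WL, WR, warm_start(WL, WR))
    cache.append((np.array(WL[:3] + WR[:3]), pstar))
    iters.append(it)
print("mean Newton iterations with recycling:", np.mean(iters))
```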
Cosmological parameters encoding our understanding of the expansion history of the Universe can be constrained by the accurate estimation of time delays arising in gravitationally lensed systems. We propose TD-CARMA, a Bayesian method to estimate cosmological time delays by modelling the observed and irregularly sampled light curves as realizations of a Continuous Auto-Regressive Moving Average (CARMA) process. Our model accounts for heteroskedastic measurement errors and microlensing, an additional source of independent extrinsic long-term variability in the source brightness. The semi-separable structure of the CARMA covariance matrix allows for fast and scalable likelihood computation using Gaussian process modelling. We obtain a sample from the joint posterior distribution of the model parameters using a nested sampling approach. This allows for ``painless'' Bayesian computation, dealing with the expected multi-modality of the posterior distribution in a straightforward manner and requiring neither starting values nor an initial guess for the time delay, unlike existing methods. In addition, the proposed sampling procedure automatically evaluates the Bayesian evidence, allowing us to perform principled Bayesian model selection. TD-CARMA is parsimonious, typically including no more than a dozen unknown parameters. We apply TD-CARMA to six doubly lensed quasars, HS 2209+1914, SDSS J1001+5027, SDSS J1206+4332, SDSS J1515+1511, SDSS J1455+1447, and SDSS J1349+1227, estimating their time delays as $-21.96 \pm 1.448$, $120.93 \pm 1.015$, $111.51 \pm 1.452$, $210.80 \pm 2.18$, $45.36 \pm 1.93$ and $432.05 \pm 1.950$, respectively. These estimates are consistent with those derived in the relevant literature, but are typically two to four times more precise.
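As a building block, here is the exact Gaussian log-likelihood of an irregularly sampled CAR(1) (Ornstein-Uhlenbeck) process, the simplest member of the CARMA family, with heteroskedastic measurement errors, computed by a scalar Kalman filter; TD-CARMA itself uses higher-order CARMA models and a semi-separable solver, and wraps a time-delay shift plus nested sampling around such a likelihood, none of which is shown here. All parameter values are illustrative.

```python
import numpy as np

def car1_loglik(t, y, yerr, mu, tau2, omega):
    """Exact log-likelihood of an irregularly sampled CAR(1)/Ornstein-Uhlenbeck
    process with heteroskedastic measurement errors, via a scalar Kalman filter."""
    m, P, ll = 0.0, tau2, 0.0          # stationary prior on the latent state
    for i in range(len(t)):
        if i > 0:
            a = np.exp(-omega * (t[i] - t[i - 1]))
            m, P = a * m, a * a * P + tau2 * (1.0 - a * a)
        S = P + yerr[i] ** 2           # innovation variance
        r = y[i] - mu - m
        ll += -0.5 * (np.log(2 * np.pi * S) + r * r / S)
        K = P / S                      # Kalman gain
        m, P = m + K * r, (1.0 - K) * P
    return ll

# Illustrative irregular "light curve" simulated from the same model.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 200))
tau2, omega, mu = 0.04, 0.1, 18.0
x, xs = 0.0, []
for i, ti in enumerate(t):
    a = np.exp(-omega * (ti - t[i - 1])) if i > 0 else 0.0
    x = a * x + np.sqrt(tau2 * (1 - a * a)) * rng.standard_normal()
    xs.append(x)
yerr = 0.02 * np.ones_like(t)
y = mu + np.array(xs) + yerr * rng.standard_normal(len(t))
print(car1_loglik(t, y, yerr, mu, tau2, omega))
```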
The Internet of Things (IoT) generates massive, high-speed, temporally correlated streaming data and is often connected with online inference tasks under computational or energy constraints. Online analysis of such streaming time series often faces a trade-off between statistical efficiency and computational cost. One important approach to balancing this trade-off is sampling, where only a small portion of the sample is selected for model fitting and updating. Motivated by the demands of dynamic relationship analysis in IoT systems, we study the data-dependent sample selection and online inference problem for a multi-dimensional streaming time series, aiming to provide low-cost real-time analysis of high-speed power grid electricity consumption data. Inspired by the D-optimality criterion in the design of experiments, we propose a class of online data reduction methods that achieve an optimal sampling criterion and improve the computational efficiency of the online analysis. We show that the optimal solution amounts to a strategy that mixes Bernoulli sampling with leverage score sampling. The leverage score sampling involves auxiliary estimations that have a computational advantage over recursive least squares updates. Theoretical properties of the auxiliary estimations involved are also discussed. When applied to European power grid consumption data, the proposed leverage-score-based sampling methods outperform the benchmark sampling method in online estimation and prediction. The general applicability of the sampling-assisted online estimation method is assessed via simulation studies.
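A minimal sketch of the Bernoulli/leverage-score mixture for streaming least squares: each arriving point is kept with probability proportional to a convex combination of a constant and a leverage-score surrogate, and the estimator is computed from inverse-probability-weighted normal equations. The mixture weight, refresh schedule, and data model are illustrative assumptions, not the paper's tuned estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T, budget, alpha = 5, 20000, 0.05, 0.5
beta = rng.standard_normal(d)                # true regression coefficients

S = np.eye(d)                                # running second-moment estimate of x
Sigma_inv = np.eye(d)                        # auxiliary inverse, refreshed occasionally
XtX, Xty = 1e-6 * np.eye(d), np.zeros(d)
kept = 0

for t in range(1, T + 1):
    x = rng.standard_normal(d)
    y = x @ beta + 0.1 * rng.standard_normal()
    S += (np.outer(x, x) - S) / t            # cheap auxiliary moment update
    if t % 1000 == 0:
        Sigma_inv = np.linalg.inv(S)         # refresh instead of per-step RLS update
    lev = (x @ Sigma_inv @ x) / d            # leverage-score surrogate (mean ~ 1)
    p = min(1.0, budget * (alpha + (1 - alpha) * lev))  # Bernoulli/leverage mixture
    if rng.random() < p:
        w = 1.0 / p                          # inverse-probability weighting
        XtX += w * np.outer(x, x)
        Xty += w * x * y
        kept += 1

beta_hat = np.linalg.solve(XtX, Xty)
print(f"kept {kept}/{T} points, estimation error {np.linalg.norm(beta_hat - beta):.4f}")
```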