We consider the problem of accurate channel estimation for OTFS-based systems with few transmit/receive antennas, where the additional sparsity afforded by a large number of antennas is not available. For such systems, the sparsity of the effective delay-Doppler (DD) domain channel is adversely affected when the channel path delays and Doppler shifts are non-integer multiples of the delay and Doppler domain resolution, and also when practical transmit and receive pulses are used. In this paper we propose a Modified Maximum Likelihood Channel Estimation (M-MLE) method for OTFS-based systems which exploits the fine delay and Doppler domain resolution of the OTFS modulated signal to decouple the joint estimation of the channel parameters (i.e., channel gain, delay and Doppler shift) of all channel paths into a separate estimation of the channel parameters for each path. We further observe that, with fine delay and Doppler domain resolution, the received DD domain signal along a particular channel path can be written as the product of a delay domain term and a Doppler domain term, where the delay domain term depends primarily on the delay of that path and the Doppler domain term depends primarily on its Doppler shift. This allows us to propose another method, termed the two-step estimation (TSE) method, where the joint two-dimensional estimation of the delay and Doppler shift of a particular path in the M-MLE method is further decoupled into two separate one-dimensional estimation problems, one for the delay and one for the Doppler shift of that path. Simulations reveal that the proposed methods (M-MLE and TSE) achieve better channel estimation accuracy at lower complexity than other known methods for accurate OTFS channel estimation.
Orthogonal time frequency space (OTFS) modulation has recently emerged as an effective waveform for tackling linear time-varying channels. The OTFS literature assumes approximately constant channel gains for every group of samples within each OTFS block, which limits the maximum Doppler frequency that OTFS can tolerate. Additionally, the presence of a cyclic prefix (CP) in the OTFS signal limits the flexibility in adjusting its parameters to improve its robustness against channel time variations. Therefore, in this paper, we study the possibility of removing the CP overhead from OTFS and breaking its Doppler limitations through multiple-antenna processing in the large antenna regime. We asymptotically analyze the performance of time-reversal maximum ratio combining (TR-MRC) for OTFS without CP. We show that doubly dispersive channel effects average out in the large antenna regime when the maximum Doppler shift is within the OTFS limitations. However, for considerably larger Doppler shifts exceeding these limitations, a residual Doppler effect remains. Our asymptotic derivations reveal that this effect converges to a scaling of the received symbols in the delay dimension by samples of a Bessel function that depends on the maximum Doppler shift. Hence, we propose a novel residual Doppler correction (RDC) windowing technique that can break the Doppler limitations of OTFS and lead to performance close to that of linear time-invariant channels. Finally, we confirm the validity of our claims through simulations.
The main result in this paper is an error estimate for interpolation by biharmonic polysplines in an annulus $A\left( r_{1},r_{N}\right)$, with respect to a partition by concentric annular domains $A\left( r_{1},r_{2}\right) ,\ldots,A\left( r_{N-1},r_{N}\right)$ for radii $0<r_{1}<\cdots<r_{N}$. The biharmonic polysplines interpolate a smooth function on the spheres $\left\vert x\right\vert =r_{j}$ for $j=1,\ldots,N$ and satisfy natural boundary conditions for $\left\vert x\right\vert =r_{1}$ and $\left\vert x\right\vert =r_{N}$. By analogy with a technique in one-dimensional spline theory established by C. de Boor, we base our proof on error estimates for harmonic interpolation splines with respect to the partition by the annuli $A\left( r_{j-1},r_{j}\right)$. For these estimates it is important to determine the smallest constant $c\left( \Omega\right)$, where $\Omega=A\left( r_{j-1},r_{j}\right)$, among all constants $c$ satisfying \[ \sup_{x\in\Omega}\left\vert f\left( x\right) \right\vert \leq c\sup_{x\in\Omega}\left\vert \Delta f\left( x\right) \right\vert \] for all $f\in C^{2}\left( \Omega\right) \cap C\left( \overline{\Omega}\right)$ vanishing on the boundary of the bounded domain $\Omega$. In this paper we describe $c\left( \Omega\right)$ for an annulus $\Omega=A\left( r,R\right)$ and we give the estimate \[ \min\left\{ \frac{1}{2d},\frac{1}{8}\right\} \left( R-r\right)^{2}\leq c\left( A\left( r,R\right) \right) \leq\max\left\{ \frac{1}{2d},\frac{1}{8}\right\} \left( R-r\right)^{2} \] where $d$ is the dimension of the underlying space.
We develop an \textit{a posteriori} error analysis for the time of the first occurrence of an event, specifically, the time at which a functional of the solution to a partial differential equation (PDE) first achieves a threshold value on a given time interval. This novel quantity of interest (QoI) differs from classical QoIs, which are modeled as bounded linear (or nonlinear) functionals. Taylor's theorem and an adjoint-based \textit{a posteriori} analysis are used to derive computable and accurate error estimates for semi-linear parabolic and hyperbolic PDEs. The accuracy of the error estimates is demonstrated through numerical solutions of the one-dimensional heat equation and the linearized shallow water equations (SWE), representing parabolic and hyperbolic cases, respectively.
We propose a novel method for computing $p$-values based on nested sampling (NS) applied to the sampling space rather than the parameter space of the problem, in contrast to its usage in Bayesian computation. The computational cost of NS scales as $\log^2(1/p)$, which compares favorably to the $1/p$ scaling for Monte Carlo (MC) simulations. For significances greater than about $4\sigma$ in both a toy problem and a simplified resonance search, we show that NS requires orders of magnitude fewer simulations than ordinary MC estimates. This is particularly relevant for high-energy physics, which adopts a $5\sigma$ gold standard for discovery. We conclude with remarks on new connections between Bayesian and frequentist computation and possibilities for tuning NS implementations for still better performance in this setting.
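The scaling contrast can be made concrete with a back-of-the-envelope sample-size comparison. The sketch below is illustrative only: the MC count follows from the binomial standard error of a tail-probability estimate, while the NS count applies the $\log^2(1/p)$ scaling with an assumed prefactor (`c=100`) that is not taken from the paper.

```python
import math

def mc_samples(p, rel_err=0.1):
    """Simulations needed for a relative standard error `rel_err` on a
    p-value p via plain Monte Carlo: std = sqrt(p(1-p)/N), so
    N ~ (1-p) / (p * rel_err^2) -- the O(1/p) scaling."""
    return math.ceil((1.0 - p) / (p * rel_err**2))

def ns_samples(p, c=100):
    """Illustrative O(log^2(1/p)) nested-sampling cost; the prefactor
    c is an assumption for this sketch, not a measured constant."""
    return math.ceil(c * math.log(1.0 / p) ** 2)

# Two-sided 5-sigma p-value, the discovery threshold mentioned above
p5 = 2.87e-7
print(mc_samples(p5))  # hundreds of millions of simulations
print(ns_samples(p5))  # tens of thousands with the assumed prefactor
```

The gap widens rapidly with significance, since the MC cost grows exponentially in the number of sigmas while the NS cost grows only polynomially.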
Hamilton and Moitra (2021) showed that, in certain regimes, it is not possible to accelerate Riemannian gradient descent in the hyperbolic plane if we restrict ourselves to algorithms which make queries in a (large) bounded domain and which receive gradients and function values corrupted by a (small) amount of noise. We show that acceleration remains unachievable for any deterministic algorithm which receives exact gradient and function-value information (unbounded queries, no noise). Our results hold for the classes of strongly and nonstrongly geodesically convex functions, and for a large class of Hadamard manifolds including hyperbolic spaces and the symmetric space $\mathrm{SL}(n) / \mathrm{SO}(n)$ of positive definite $n \times n$ matrices of determinant one. This cements a surprising gap between the complexity of convex optimization and geodesically convex optimization: for hyperbolic spaces, Riemannian gradient descent is optimal on the class of smooth and strongly geodesically convex functions, in the regime where the condition number scales with the radius of the optimization domain. The key idea for proving the lower bound consists of perturbing the hard functions of Hamilton and Moitra (2021) with sums of bump functions chosen by a resisting oracle.
Wireless links using massive MIMO transceivers are vital for next-generation wireless communications networks. Precoding in massive MIMO transmission requires accurate downlink channel state information (CSI). Many recent works have effectively applied deep learning (DL) to jointly train user equipment (UE)-side compression networks for delay domain CSI and a base station (BS)-side decoding scheme. Vitally, these works assume that the full delay domain CSI is available at the UE, but in reality, the UE must estimate the delay domain CSI from a limited number of frequency domain pilots. In this work, we propose a linear pilot-to-delay (P2D) estimator that transforms sparse frequency pilots into the truncated delay CSI. We show that the P2D estimator is accurate under frequency downsampling, and we demonstrate that the P2D estimate can be effectively utilized with existing autoencoder-based CSI estimation networks. In addition to accounting for pilot-based estimates of downlink CSI, we apply unrolled optimization networks to emulate iterative solutions to compressed sensing (CS), and we demonstrate better estimation performance than prior autoencoder-based DL networks. Finally, we investigate the efficacy of trainable CS networks in a differential encoding network for time-varying CSI estimation, and we propose a new network, MarkovNet-ISTA-ENet, comprising both a CS network for initial CSI estimation and multiple autoencoders to estimate the error terms. We demonstrate that this heterogeneous network has better asymptotic performance than networks comprised of only one type of subnetwork.
Freight carriers rely on tactical planning to design their service network to satisfy demand in a cost-effective way. For computational tractability, deterministic and cyclic Service Network Design (SND) formulations are used to solve large-scale problems. A central input is the periodic demand, that is, the demand expected to repeat in every period of the planning horizon. In practice, demand is predicted by a time series forecasting model and the periodic demand is taken as the average of those forecasts. This is, however, only one of many possible mappings. The problem of selecting this mapping has hitherto been overlooked in the literature. We propose to use the structure of the downstream decision-making problem to select a good mapping. For this purpose, we introduce a multilevel mathematical programming formulation that explicitly links the time series forecasts to the SND problem of interest. The solution is a periodic demand estimate that minimizes costs over the tactical planning horizon. We report results from an extensive empirical study of a large-scale application from the Canadian National Railway Company. They clearly show the importance of the periodic demand estimation problem. Indeed, the planning costs exhibit an important variation over different periodic demand estimates, and using an estimate different from the mean forecast can lead to substantial cost reductions. Moreover, the costs associated with the periodic demand estimates based on forecasts were comparable to, or even better than, those obtained using the mean of actual demand.
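As a toy illustration of the point that averaging is only one possible mapping from per-period forecasts to a single periodic demand, the snippet below applies a few alternative mappings to hypothetical weekly forecasts for one origin-destination pair; the numbers and the mappings shown are illustrative assumptions, not the paper's data or its optimization-based selection.

```python
import statistics

# Hypothetical weekly demand forecasts (e.g., railcars) for one lane
forecasts = [112.0, 98.0, 135.0, 121.0, 104.0]

# A "mapping" collapses the forecast series into one periodic demand.
# The mean is the usual default; other mappings trade under- against
# over-capacity differently in the downstream SND problem.
mappings = {
    "mean":   statistics.mean(forecasts),
    "median": statistics.median(forecasts),
    "q75":    statistics.quantiles(forecasts, n=4)[2],  # conservative sizing
}
print(mappings)
```

The paper's contribution is precisely to choose among such mappings using the downstream planning costs rather than a fixed statistical rule.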
Moment methods are an important means of density estimation, but they generally depend strongly on the choice of feasible functions, which severely affects performance. We propose a non-classical parameterization for density estimation using sample moments, which does not require the choice of such functions. The parameterization is induced by the Kullback-Leibler distance, and its solution, which is proved to exist and to be unique subject to a simple prior that does not depend on the data, can be obtained by convex optimization. Simulation results demonstrate the performance of the proposed estimator in estimating multi-modal densities which are mixtures of different types of functions.
Modeling optical wave propagation in optical fiber amounts to solving the nonlinear Schr\"odinger equation (NLSE) quickly and accurately, and it underpins the research progress and system design of optical fiber communications, which are the infrastructure of modern communication systems. Traditional modeling of fiber channels using the split-step Fourier method (SSFM) has long been regarded as challenging in long-haul wavelength division multiplexing (WDM) optical fiber communication systems because it is extremely time-consuming. Here we propose a linear-nonlinear feature decoupling distributed (FDD) waveform modeling scheme for the long-haul WDM fiber channel, where the linear channel effects are modeled by NLSE-derived model-driven methods and the nonlinear effects are modeled by data-driven deep learning methods. Moreover, the proposed scheme fits only a single fiber span and then applies the model recursively to reach the required transmission distance. The proposed modeling scheme is demonstrated to have high accuracy, high computing speed, and robust generalization across different optical launch powers, modulation formats, channel numbers and transmission distances. The total running time of the FDD waveform modeling scheme for 41-channel 1040-km fiber transmission is only 3 minutes, versus more than 2 hours using the SSFM for each input condition, a 98% reduction in computing time. Considering the multi-round optimization performed when adjusting system parameters, the complexity reduction is significant. The results represent a remarkable improvement in nonlinear fiber modeling and open up novel perspectives for the solution of NLSE-like partial differential equations and optical fiber physics problems.
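For context, the SSFM baseline described above alternates between linear (dispersion) steps applied in the frequency domain and nonlinear (Kerr) steps applied in the time domain. The following is a minimal single-channel, lossless sketch of that textbook method, not the paper's FDD scheme; the fiber parameters (`beta2`, `gamma`) and the pulse are typical illustrative values.

```python
import numpy as np

# Minimal symmetric split-step Fourier solver for the lossless NLSE
#   dA/dz = -i*(beta2/2) * d^2A/dt^2 + i*gamma*|A|^2 * A
# beta2 [s^2/m] is group-velocity dispersion, gamma [1/(W*m)] is the
# Kerr nonlinearity; values below are typical, not from the paper.
def ssfm(A0, dt, length, dz, beta2=-21.7e-27, gamma=1.3e-3):
    """Propagate the field envelope A0 (samples spaced dt seconds)
    over `length` meters in steps of `dz` meters."""
    n = A0.size
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)        # angular frequencies
    half_lin = np.exp(0.5j * beta2 * w**2 * (dz / 2))  # dispersion half step
    A = A0.astype(complex)
    for _ in range(int(round(length / dz))):
        A = np.fft.ifft(half_lin * np.fft.fft(A))    # linear half step
        A = A * np.exp(1j * gamma * np.abs(A)**2 * dz)  # nonlinear full step
        A = np.fft.ifft(half_lin * np.fft.fft(A))    # linear half step
    return A

# 1 W-peak Gaussian pulse on a 1 ps grid, propagated over 1 km
t = (np.arange(1024) - 512) * 1e-12
A0 = np.exp(-(t / 20e-12) ** 2)
A = ssfm(A0, 1e-12, 1000.0, 10.0)
```

The per-step cost is a handful of FFTs; the scheme's runtime burden in long-haul WDM comes from repeating such steps over thousands of kilometers and many channels, which is what the FDD approach avoids.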
There is growing interest in object detection for advanced driver assistance systems and autonomous robots and vehicles. To enable such innovative systems, we need faster object detection. In this work, we investigate the trade-off between accuracy and speed with domain-specific approximations, i.e., category-aware image size scaling and proposals scaling, for two state-of-the-art deep learning-based object detection meta-architectures. We study the effectiveness of applying approximations both statically and dynamically to understand their potential and applicability. By conducting experiments on the ImageNet VID dataset, we show that domain-specific approximation has great potential to improve the speed of the system without deteriorating the accuracy of object detectors, i.e., up to a 7.5x speedup for dynamic domain-specific approximation. Building on these insights toward harvesting domain-specific approximation, we devise a proof-of-concept runtime, AutoFocus, that exploits dynamic domain-specific approximation.