The short-time Fourier transform (STFT), or the discrete Gabor transform (DGT), has been used extensively in signal analysis and processing. Its properties are characterized by the window function. For signal processing, designing a special window, called a tight window, is important because it is known to make DGT-domain processing robust to error. In this paper, we propose a method for designing tight windows that minimize the sidelobe energy. The design is formulated as a constrained spectral concentration problem, and Newton's method on an oblique manifold is derived to obtain a solution efficiently. Our numerical example shows that the proposed algorithm requires only a few iterations to reach a stationary point.
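As a point of reference for the tightness requirement (this is not the sidelobe-minimizing design proposed above), the following sketch computes the canonical tight window of a discrete Gabor system in the painless case, where the frame operator is diagonal; the signal length, hop size, number of channels, and Hann prototype are illustrative assumptions.

```python
# A minimal sketch: canonical tight window in the painless case (window support <= M),
# where the Gabor frame operator is diagonal. Parameters are illustrative.
import numpy as np

L, a, M = 512, 128, 512        # signal length, hop size, number of frequency channels
g = np.hanning(L)              # hypothetical prototype window

# Diagonal of the frame operator: M times the sum of squared copies of g shifted by multiples of a.
diag = np.zeros(L)
for k in range(L // a):
    diag += np.roll(g, k * a) ** 2
g_tight = g / np.sqrt(M * diag)   # canonical tight window S^{-1/2} g

# Tightness check: the hop-shifted squared copies of g_tight sum to the constant 1 / M.
check = np.zeros(L)
for k in range(L // a):
    check += np.roll(g_tight, k * a) ** 2
print(np.allclose(check, 1.0 / M))
```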
The intelligent reflecting surface (IRS) has emerged as a cost-effective solution to enhance wireless communication performance via passive signal reflection. Existing works on the IRS have mainly focused on passive beamforming/reflection designs that boost the communication rate for users whose channel state information (CSI) is fully or partially known. However, how to exploit the IRS to improve wireless transmission reliability without any CSI, which is typical in high-mobility/delay-sensitive communication scenarios, remains largely open. In this paper, we study a new IRS-aided communication system in which the IRS is integrated with its aided access point (AP) to achieve the functions of transmit diversity and passive beamforming simultaneously. Specifically, we first show an interesting result that the IRS's passive beamforming gain in any direction is invariant to the common phase shift applied to all of its reflecting elements. Accordingly, we design the common phase shift of the IRS elements to achieve transmit diversity at the AP side without the need for any CSI of the users. In addition, we propose a practical method for the users to estimate the CSI at the receiver side for information decoding. Meanwhile, we show that the conventional passive beamforming gain of the IRS can be retained for the other users whose CSI is known at the AP. Furthermore, we derive the asymptotic performance of both IRS-aided transmit diversity and passive beamforming in closed form by considering a large-scale IRS with an infinite number of elements. Numerical results validate our analysis and show the performance gains of the proposed IRS-aided simultaneous transmit diversity and passive beamforming scheme over other benchmark schemes.
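A quick numerical illustration of the invariance result stated above: the passive beamforming gain $|\sum_{n} h_n e^{j\theta_n} g_n|^2$ is unchanged when a common phase shift is added to every element phase. The random channel draws and phase values in this sketch are hypothetical.

```python
# Hypothetical channels h (AP -> IRS) and g (IRS -> user); the gain is invariant to a
# common phase shift phi added to all per-element phase shifts theta_n.
import numpy as np

rng = np.random.default_rng(0)
N = 64                                              # number of reflecting elements
h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

theta = -np.angle(h * g)                            # phase-aligned (beamforming) shifts
phi = 1.2345                                        # arbitrary common phase shift

gain = np.abs(np.sum(h * np.exp(1j * theta) * g)) ** 2
gain_shifted = np.abs(np.sum(h * np.exp(1j * (theta + phi)) * g)) ** 2
print(np.isclose(gain, gain_shifted))               # True: the common phase does not affect the gain
```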
Reed-Muller (RM) codes are known for their good minimum distance. One can use their structure to construct polar-like codes with good distance properties by choosing the information set as the rows of the polarization matrix with the highest Hamming weight, instead of the most reliable synthetic channels. However, the information length options of RM codes are quite limited due to their specific structure. In this work, we present sufficient conditions to increase the information length by at least one bit for some underlying RM codes, so as to obtain pre-transformed polar-like codes with the same minimum distance as the lower-rate codes. Moreover, our findings are combined with the method presented in [1] to further reduce the number of minimum-weight codewords. Numerical results show that the designed codes perform close to the meta-converse bound at short blocklengths and better than polarization-adjusted convolutional (PAC) codes with the same parameters.
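A small sketch of the weight-based selection referred to above (not the pre-transformation or the sufficient conditions of this work): construct the polarization matrix as a Kronecker power of the $2\times 2$ kernel and pick the $k$ rows with the largest Hamming weight as the information set. The parameters and the tie-breaking rule are illustrative assumptions.

```python
# Choose the information set of a polar-like code by row weight of the polarization
# matrix (RM-like rule), instead of synthetic-channel reliability. Illustrative only.
import numpy as np

def polarization_matrix(m):
    """Kronecker power of the binary kernel [[1, 0], [1, 1]]."""
    kernel = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        G = np.kron(G, kernel)
    return G

m, k = 5, 16                                   # blocklength 2^m = 32, k information bits
G = polarization_matrix(m)
row_weights = G.sum(axis=1).astype(int)        # Hamming weight of each row (a power of two)
info_set = np.argsort(-row_weights, kind="stable")[:k]   # k highest-weight rows
print(sorted(info_set.tolist()))
```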
Let $\mathbf{X}$ be a random variable uniformly distributed on the discrete cube $\{ -1,1\} ^{n}$, and let $T_{\rho}$ be the noise operator acting on Boolean functions $f:\{ -1,1\} ^{n}\to\{ 0,1\} $, where $\rho\in[0,1]$ is the noise parameter, representing the correlation coefficient between each coordinate of $\mathbf{X}$ and its noise-corrupted version. Given a convex function $\Phi$ and the mean $\mathbb{E}f(\mathbf{X})=a\in[0,1]$, which Boolean function $f$ maximizes the $\Phi$-stability $\mathbb{E}[\Phi(T_{\rho}f(\mathbf{X}))]$ of $f$? Special cases of this problem include the (symmetric and asymmetric) $\alpha$-stability problems and the "Most Informative Boolean Function" problem. In this paper, we provide several upper bounds for the maximal $\Phi$-stability. By specializing $\Phi$ to particular forms and applying these upper bounds, we partially resolve Mossel and O'Donnell's conjecture on $\alpha$-stability with $\alpha>2$, Li and M\'edard's conjecture on $\alpha$-stability with $1<\alpha<2$, and Courtade and Kumar's conjecture on the "Most Informative Boolean Function", which corresponds to a conjecture on $\alpha$-stability with $\alpha=1$. Our proofs are based on discrete Fourier analysis, optimization theory, and improvements of the Friedgut--Kalai--Naor (FKN) theorem. Our improvements of the FKN theorem are sharp or asymptotically sharp for certain cases.
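As a concrete instance of the quantity being maximized, the sketch below evaluates the $\Phi$-stability of the dictator function $f(x)=\mathbf{1}\{x_1=1\}$, for which $T_{\rho}f(x)=(1+\rho x_1)/2$ in closed form; the choice $\Phi(t)=t^2$ (the $\alpha=2$ case) is an illustrative assumption.

```python
# Phi-stability E[Phi(T_rho f(X))] of the dictator f(x) = 1{x_1 = 1}, with Phi(t) = t^2.
import numpy as np

rho = 0.6
Phi = lambda t: t ** 2

# Closed form: T_rho f(x) = (1 + rho * x_1) / 2, so the stability averages Phi over x_1 = +-1.
exact = 0.5 * (Phi((1 + rho) / 2) + Phi((1 - rho) / 2))

# Monte Carlo check: only x_1 matters for the dictator, so simulate that coordinate alone.
# Flipping it with probability (1 - rho) / 2 gives correlation coefficient rho with its noisy copy.
rng = np.random.default_rng(0)
X = rng.choice([-1, 1], size=500_000)
Y = np.where(rng.random(X.shape) < (1 - rho) / 2, -X, X)
t_plus = (Y[X == 1] == 1).mean()      # estimates T_rho f at x_1 = +1
t_minus = (Y[X == -1] == 1).mean()    # estimates T_rho f at x_1 = -1
print(exact, 0.5 * (Phi(t_plus) + Phi(t_minus)))
```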
In our previous work [SIAM J. Sci. Comput. 43(3) (2021) B784-B810], an accurate hyper-singular boundary integral equation method for dynamic poroelasticity in two dimensions was developed. This work is devoted to the more complex and difficult three-dimensional problems with the Neumann boundary condition, and both the direct and indirect methods are adopted to construct combined boundary integral equations. The strongly-singular and hyper-singular integral operators are reformulated as compositions of weakly-singular integral operators and tangential-derivative operators, which allows us to prove the jump relations associated with the poroelastic layer potentials and boundary integral operators in a simple manner. Relying on the investigated spectral properties of the strongly-singular operators, which indicate that the corresponding eigenvalues accumulate at three points whose values depend only on the two Lam\'e constants, as well as the spectral properties of the Calder\'on relations of poroelasticity, we propose regularized integral equations that require only a small number of GMRES iterations. Numerical examples based on a Chebyshev-based rectangular-polar solver are presented to demonstrate the accuracy and efficiency of the proposed methodology.
Reliable probability estimation is of crucial importance in many real-world applications where there is inherent uncertainty, such as weather forecasting, medical prognosis, or collision avoidance in autonomous vehicles. Probability-estimation models are trained on observed outcomes (e.g. whether it has rained or not, or whether a patient has died or not), because the ground-truth probabilities of the events of interest are typically unknown. The problem is therefore analogous to binary classification, with the important difference that the objective is to estimate probabilities rather than to predict the specific outcome. The goal of this work is to investigate probability estimation from high-dimensional data using deep neural networks. Several methods exist to improve the probabilities generated by these models, but they mostly focus on classification problems where the probabilities are related to model uncertainty. In the case of problems with inherent uncertainty, it is challenging to evaluate performance without access to ground-truth probabilities. To address this, we build a synthetic dataset to study and compare different computable metrics. We evaluate existing methods on the synthetic data as well as on three real-world probability estimation tasks, all of which involve inherent uncertainty. We also give a theoretical analysis of a model for high-dimensional probability estimation which reproduces several of the phenomena observed in our experiments. Finally, we propose a new method for probability estimation using neural networks, which modifies the training process to promote output probabilities that are consistent with empirical probabilities computed from the data. The method outperforms existing approaches on most metrics on the simulated as well as the real-world data.
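In the spirit of the synthetic study described above, the following sketch draws ground-truth probabilities, samples binary outcomes from them, and contrasts a metric computable from outcomes alone (the Brier score) with the MSE to the true probabilities, which is unavailable in practice. The miscalibrated "model" is a hypothetical transform of the truth, not the method proposed in the paper.

```python
# Synthetic comparison of a computable metric (Brier score on outcomes) with the
# ground-truth MSE that is normally inaccessible. All quantities are illustrative.
import numpy as np

rng = np.random.default_rng(0)
p_true = rng.uniform(size=100_000)              # ground-truth probabilities
y = rng.binomial(1, p_true)                     # observed binary outcomes
p_hat = np.clip(p_true ** 1.5 + 0.05, 0.0, 1.0) # hypothetical, miscalibrated estimator

brier = np.mean((p_hat - y) ** 2)               # computable without ground truth
mse_to_truth = np.mean((p_hat - p_true) ** 2)   # requires the unknown p_true
print(brier, mse_to_truth)
```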
In order for a robot to perform a task, several algorithms need to be executed, sometimes simultaneously. Algorithms can be run either on the robot itself or, upon request, on cloud infrastructure. The term cloud infrastructure describes the hardware, storage, abstracted resources, and network resources related to cloud computing. Depending on where each algorithm is executed, the overall execution time and the memory required on the robot change accordingly. The price of a robot depends, among other things, on its memory capacity and computational power. We address the question of how to maintain a given performance while using a cheaper robot (with fewer resources) by assigning computational tasks to the cloud infrastructure, subject to memory, computational power, and communication constraints. Moreover, for a fixed robot, our model provides a way to achieve optimal overall performance. We provide a general model for the optimal allocation of algorithms under these constraints and illustrate it with simulation results. The main advantage of our model is that it provides a task allocation that is simultaneously optimal for memory and time.
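The following toy sketch illustrates the form of the allocation decision (not the paper's general model): each algorithm runs either on the robot or in the cloud, and a brute-force search finds the assignment minimizing total execution time subject to the robot's memory budget. All times, memory figures, and the budget are hypothetical.

```python
# Brute-force robot/cloud allocation under a memory constraint. Illustrative numbers only.
from itertools import product

tasks = [
    # (time_on_robot, time_in_cloud_incl_communication, memory_on_robot)
    (2.0, 3.5, 40),
    (5.0, 2.0, 120),
    (1.0, 4.0, 10),
    (3.0, 1.5, 80),
]
MEMORY_BUDGET = 150    # memory available on the robot

best = None
for assignment in product(("robot", "cloud"), repeat=len(tasks)):
    memory = sum(t[2] for t, place in zip(tasks, assignment) if place == "robot")
    if memory > MEMORY_BUDGET:
        continue                                   # infeasible: exceeds the robot's memory
    total_time = sum(t[0] if place == "robot" else t[1] for t, place in zip(tasks, assignment))
    if best is None or total_time < best[0]:
        best = (total_time, assignment)

print(best)     # minimal total time and the corresponding robot/cloud assignment
```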
In this paper, we propose Nesterov Accelerated Shuffling Gradient (NASG), a new algorithm for convex finite-sum minimization problems. Our method integrates the traditional Nesterov acceleration momentum with different shuffling sampling schemes. We show that our algorithm achieves an improved rate of $\mathcal{O}(1/T)$ under unified shuffling schemes, where $T$ is the number of epochs. This rate is better than that of any other shuffling gradient method in the convex regime. Our convergence analysis requires neither a bounded-domain nor a bounded-gradient assumption. For randomized shuffling schemes, we improve the convergence bound further. Under an additional initial condition, we show that our method converges faster in a small neighborhood of the solution. Numerical simulations demonstrate the efficiency of our algorithm.
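One way to realize the "momentum plus per-epoch shuffling" structure described above is sketched below on a convex least-squares finite sum; the precise NASG update, step sizes, and momentum schedule may differ from this illustrative version.

```python
# Sketch of a shuffled gradient sweep with Nesterov-style momentum on a convex finite sum.
# Step size and momentum are illustrative; this is not the paper's exact update rule.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

def grad_i(w, i):
    """Gradient of the i-th component f_i(w) = 0.5 * (a_i^T w - b_i)^2."""
    return A[i] * (A[i] @ w - b[i])

x = np.zeros(d)
y = x.copy()
lr, momentum = 0.005, 0.5
for epoch in range(50):
    perm = rng.permutation(n)             # reshuffle the components once per epoch
    for i in perm:
        x_next = y - lr * grad_i(y, i)    # component gradient step at the extrapolated point
        y = x_next + momentum * (x_next - x)
        x = x_next

print(0.5 * np.mean((A @ x - b) ** 2))    # average objective value after training
```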
We introduce a prior for the parameters of univariate continuous distributions, based on the Wasserstein information matrix, which is invariant under reparameterisations. We briefly discuss the links between the proposed prior and information geometry. We present several examples where we can either obtain this prior in closed form or propose a numerically tractable approximation for cases where it is not available in closed form. Since this prior is improper in some cases, we present sufficient conditions for the propriety of the posterior distribution for general classes of models.
We consider the problem of estimating a continuous-time Gauss-Markov source process observed through a vector Gaussian channel with an adjustable channel gain matrix. For a given (generally time-varying) channel gain matrix, we provide formulas to compute (i) the mean-square estimation error attainable by the classical Kalman-Bucy filter, and (ii) the mutual information between the source process and its Kalman-Bucy estimate. We then formulate a novel "optimal channel gain control problem" where the objective is to control the channel gain matrix strategically to minimize the weighted sum of these two performance metrics. To develop insights into the optimal solution, we first consider the problem of controlling a time-varying channel gain over a finite time interval. A necessary optimality condition is derived based on Pontryagin's minimum principle. For a scalar system, we show that the optimal channel gain is a piecewise-constant signal with at most two switches. We also consider the problem of designing the optimal time-invariant gain to minimize the average cost over an infinite time horizon. A novel semidefinite programming (SDP) heuristic is proposed, and the exactness of the solution is discussed.
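For intuition about the first performance metric, the sketch below integrates the scalar Riccati equation that governs the Kalman-Bucy error covariance for a given time-varying channel gain; the source model $dX_t = aX_t\,dt + \sigma\,dW_t$, observation $dY_t = c(t)X_t\,dt + dV_t$, and the piecewise-constant gain with one switch are illustrative assumptions, not the optimal control derived in the paper.

```python
# Euler integration of the scalar Riccati ODE  dP/dt = 2 a P + sigma^2 - c(t)^2 P^2,
# giving the Kalman-Bucy mean-square error for a chosen channel gain c(t). Illustrative only.
import numpy as np

a, sigma = -0.5, 1.0
T, dt = 10.0, 1e-3

def gain(t):
    return 2.0 if t < 4.0 else 0.5          # hypothetical piecewise-constant gain with one switch

P = 1.0                                      # initial error covariance
for t in np.arange(0.0, T, dt):
    P += dt * (2.0 * a * P + sigma ** 2 - gain(t) ** 2 * P ** 2)

print(P)                                     # error covariance at time T under this gain schedule
```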
Minimax problems have recently gained tremendous attention across the optimization and machine learning communities. In this paper, we introduce a new quasi-Newton method for minimax problems, which we call the $J$-symmetric quasi-Newton method. The method is obtained by exploiting the $J$-symmetric structure of the second-order derivative of the objective function in minimax problems. We show that the Hessian estimate (as well as its inverse) can be updated by a rank-2 operation, and it turns out that the update rule is a natural generalization of the classic Powell symmetric Broyden (PSB) method from minimization problems to minimax problems. In theory, we show that our proposed quasi-Newton algorithm enjoys local Q-superlinear convergence to a desirable solution under standard regularity conditions. Furthermore, we introduce a trust-region variant of the algorithm that enjoys global R-superlinear convergence. Finally, we present numerical experiments that verify our theory and show the effectiveness of our proposed algorithms compared to Broyden's method and the extragradient method on three classes of minimax problems.
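For reference, the classic Powell symmetric Broyden (PSB) rank-2 update that the abstract generalizes is sketched below; the $J$-symmetric variant for minimax problems proposed in the paper is not reproduced here.

```python
# Classic PSB rank-2 update: B+ satisfies the secant condition B+ s = y and stays symmetric.
import numpy as np

def psb_update(B, s, y):
    r = y - B @ s                            # residual of the secant condition
    ss = s @ s
    return B + (np.outer(r, s) + np.outer(s, r)) / ss - (s @ r) * np.outer(s, s) / ss ** 2

# Quick check on random data: symmetry and the secant condition hold after the update.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)); B = (B + B.T) / 2
s, y = rng.standard_normal(4), rng.standard_normal(4)
B_new = psb_update(B, s, y)
print(np.allclose(B_new, B_new.T), np.allclose(B_new @ s, y))
```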