Given two high-dimensional Gaussians with the same mean, we prove a lower and an upper bound for their total variation distance, which are within a constant factor of one another.
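For reference, the quantity being bounded is the total variation distance; its standard definition, written here for two Gaussians $P=\mathcal{N}(\mu,\Sigma_1)$ and $Q=\mathcal{N}(\mu,\Sigma_2)$ with densities $p$ and $q$ (notation assumed, not taken from the abstract), is
\[
  d_{\mathrm{TV}}(P,Q)
  \;=\; \sup_{A \subseteq \mathbb{R}^d \text{ measurable}} \bigl| P(A) - Q(A) \bigr|
  \;=\; \tfrac{1}{2} \int_{\mathbb{R}^d} \bigl| p(x) - q(x) \bigr| \, dx .
\]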
We consider the problem of chance-constrained optimization, where one seeks to optimize a function and satisfy constraints, both of which are affected by uncertainties. Real-world instances of this problem are particularly challenging because of their inherent computational cost. To tackle such problems, we propose a new Bayesian optimization method. It applies to the situation where the uncertainty comes from some of the inputs, so that it becomes possible to define an acquisition criterion in the joint controlled-uncontrolled input space. The main contribution of this work is an acquisition criterion that accounts for both the average improvement in the objective function and the constraint reliability. The criterion is derived following the Stepwise Uncertainty Reduction logic, and its maximization provides both optimal controlled and uncontrolled parameters. Analytical expressions are given to efficiently calculate the criterion. Numerical studies on test functions are presented. Experimental comparisons with alternative sampling criteria show that the match between the sampling criterion and the problem contributes to the efficiency of the overall optimization. As a side result, an expression for the variance of the improvement is given.
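As background only, the sketch below shows the classical expected-improvement acquisition on a Gaussian-process surrogate, i.e., the standard baseline that such criteria build on; it is not the paper's SUR-based criterion and does not integrate over uncontrolled inputs or constraint reliability. All function names and the toy posterior values are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Classical EI for minimization, given the GP posterior mean/std at candidate points."""
    sigma = np.maximum(sigma, 1e-12)            # guard against zero posterior std
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy usage: posterior at three candidate points, current best observed value 0.5.
mu = np.array([0.4, 0.6, 0.3])
sigma = np.array([0.2, 0.1, 0.05])
print(expected_improvement(mu, sigma, f_best=0.5))
```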
We present a parallel algorithm for the fast Fourier transform (FFT) in higher dimensions. This algorithm generalizes the cyclic-to-cyclic one-dimensional parallel algorithm to a cyclic-to-cyclic multidimensional parallel algorithm while retaining the property of needing only a single all-to-all communication step. This is under the constraint that we use at most $\sqrt{N}$ processors for an FFT on an array with a total of $N$ elements, irrespective of the dimension $d$ or the shape of the array. The only assumption we make is that $N$ is sufficiently composite. Our algorithm starts and ends in the same data distribution. We present our multidimensional implementation FFTU, which utilizes the sequential FFTW program for its local FFTs and can handle any dimension $d$. We obtain experimental results for $d\leq 5$ using MPI on up to 4096 cores of the supercomputer Snellius, comparing FFTU with the parallel FFTW program, PFFT, and heFFTe. These results show that FFTU is competitive with the state of the art and that it allows one to use a larger number of processors, while keeping communication limited to a single all-to-all operation. For arrays of size $1024^3$ and $64^5$, FFTU achieves speedups by factors of 149 and 176, respectively, on 4096 processors.
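To illustrate the decomposition underlying such algorithms (a generic property of the multidimensional DFT, not the paper's data-distribution scheme): a $d$-dimensional FFT factors into 1-D FFTs applied along each axis, which is what allows local sequential FFT calls to be combined with a data redistribution. A small numpy check, with an arbitrarily chosen array shape:

```python
import numpy as np

x = np.random.rand(8, 8, 8)          # small 3-D array

# Full multidimensional FFT in one call.
full = np.fft.fftn(x)

# Equivalent: 1-D FFTs applied axis by axis.
step = x.astype(complex)
for axis in range(x.ndim):
    step = np.fft.fft(step, axis=axis)

print(np.allclose(full, step))       # True
```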
In this article, we propose and study a stochastic preconditioned Douglas-Rachford splitting method to solve saddle-point problems which have separable dual variables. We prove the almost sure convergence of the iteration sequences in Hilbert spaces for a class of convex-concave and nonsmooth saddle-point problems. We also provide the sublinear convergence rate for the ergodic sequence with respect to the expectation of the restricted primal-dual gap functions. Numerical experiments show the high efficiency of the proposed stochastic preconditioned Douglas-Rachford splitting methods.
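For orientation, the classical deterministic, unpreconditioned Douglas-Rachford splitting for $\min_x f(x) + g(x)$ with proper convex $f, g$, which the proposed stochastic preconditioned variant generalizes, iterates (notation assumed)
\[
  x^{k+1} = \operatorname{prox}_{\tau f}\bigl(z^{k}\bigr), \qquad
  z^{k+1} = z^{k} + \operatorname{prox}_{\tau g}\bigl(2x^{k+1} - z^{k}\bigr) - x^{k+1},
\]
and under standard assumptions $x^{k}$ converges to a minimizer of $f + g$.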
In this work, we propose a UAV-aided Integrated Access and Backhaul (IAB) system design offering 5G connectivity to ground users. The UAV is integrated with a distributed unit (DU), acting as an aerial DU which can provide 5G wireless backhaul access to a terrestrial central unit (CU). The CU-DU interface fully complies with the 3GPP-defined F1 application protocol (F1AP). Such an aerial DU can be instantiated and configured dynamically, tailored to network demands. The complete radio access network solution is based on open-source software from OpenAirInterface (OAI) and off-the-shelf commercial 5G mobile terminals. Experimental results illustrate the throughput gains and coverage extension brought by the aerial DU.
We propose a new method for estimating causal effects in longitudinal/panel data settings that we call generalized difference-in-differences. Our approach unifies two alternative approaches in these settings: ignorability estimators (e.g., synthetic controls) and difference-in-differences (DiD) estimators. We propose a new identifying assumption -- a stable bias assumption -- which generalizes the conditional parallel trends assumption in DiD, leading to the proposed generalized DiD framework. This change gives generalized DiD estimators the flexibility of ignorability estimators while maintaining the robustness to unobserved confounding of DiD. We also show how ignorability and DiD estimators are special cases of generalized DiD. We then propose influence-function based estimators of the observed data functional, allowing the use of double/debiased machine learning for estimation. We also show how generalized DiD easily extends to include clustered treatment assignment and staggered adoption settings, and we discuss how the framework can facilitate estimation of other treatment effects beyond the average treatment effect on the treated. Finally, we provide simulations which show that generalized DiD outperforms ignorability and DiD estimators when their identifying assumptions are not met, while being competitive with these special cases when their identifying assumptions are met.
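For context, the classical two-period, two-group DiD estimand that the generalized framework contains as a special case is (notation assumed: $Y_t(a)$ is the potential outcome at time $t$ under treatment $a$, and $A$ indicates treatment)
\[
  \tau_{\mathrm{ATT}}
  = \mathbb{E}\bigl[Y_1(1) - Y_1(0) \mid A = 1\bigr]
  = \bigl(\mathbb{E}[Y_1 \mid A=1] - \mathbb{E}[Y_0 \mid A=1]\bigr)
  - \bigl(\mathbb{E}[Y_1 \mid A=0] - \mathbb{E}[Y_0 \mid A=0]\bigr),
\]
where the second equality holds under the parallel trends assumption
$\mathbb{E}[Y_1(0) - Y_0(0) \mid A=1] = \mathbb{E}[Y_1(0) - Y_0(0) \mid A=0]$ together with no anticipation.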
In this paper, we introduce a causal low-latency low-complexity approach for binaural multichannel blind speaker separation in noisy reverberant conditions. The model, referred to as Group Communication Binaural Filter and Sum Network (GCBFSnet), predicts complex filters for filter-and-sum beamforming in the time-frequency domain. We apply Group Communication (GC), i.e., latent model variables are split into groups and processed with a shared sequence model, with the aim of reducing the complexity of a simple model containing only one convolutional and one recurrent module. With GC we are able to reduce the size of the model by up to 83 % and the complexity by up to 73 % compared to the model without GC, while mostly retaining performance. Even for the smallest model configuration, GCBFSnet matches the performance of a low-complexity TasNet baseline in most metrics, despite the larger size and higher number of required operations of the baseline.
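To illustrate the group-splitting idea in isolation, the minimal PyTorch sketch below splits the feature dimension into groups and processes every group with the same shared recurrent module, so the parameter count scales with the group size rather than the full width. This is an assumed, simplified illustration, not the authors' GCBFSnet architecture, and it omits the inter-group communication step; all shapes and names are placeholders.

```python
import torch
import torch.nn as nn

class GroupSharedRNN(nn.Module):
    """Split features into groups and run one shared GRU over each group."""
    def __init__(self, feat_dim, num_groups):
        super().__init__()
        assert feat_dim % num_groups == 0
        self.num_groups = num_groups
        group_dim = feat_dim // num_groups
        self.shared_rnn = nn.GRU(group_dim, group_dim, batch_first=True)

    def forward(self, x):                                    # x: (batch, time, feat_dim)
        groups = x.chunk(self.num_groups, dim=-1)
        outs = [self.shared_rnn(g.contiguous())[0] for g in groups]  # same weights for all groups
        return torch.cat(outs, dim=-1)

y = GroupSharedRNN(feat_dim=64, num_groups=8)(torch.randn(2, 100, 64))
print(y.shape)   # torch.Size([2, 100, 64])
```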
The satisfiability problem is NP-complete, but there are subclasses in which every instance is satisfiable; such subclasses are obtained by restricting the shape of the formula. Darmann and D\"ocker showed that the subclass MONOTONE $3$-SAT-$(k,1)$ is NP-complete for $k \geq 5$ and posed the open question of whether all instances of MONOTONE $3$-SAT-$(3,1)$ are satisfiable. This paper shows that all instances of MONOTONE $3$-SAT-$(3,1)$ are satisfiable, using the new concept of color-structures.
We present fast simulation methods for the self-assembly of complex shapes in two dimensions. The shapes are modeled via a general boundary curve and interact via a standard volume term promoting overlap and an interpenetration penalty. To efficiently realize the Gibbs measure on the space of possible configurations we employ the hybrid Monte Carlo algorithm together with a careful use of signed distance functions for energy evaluation. Motivated by the self-assembly of identical coat proteins of the tobacco mosaic virus which assemble into a helical shell, we design a particular nonconvex 2D model shape and demonstrate its robust self-assembly into a unique final state. Our numerical experiments reveal two essential prerequisites for this self-assembly process: blocking and matching (i.e., local repulsion and attraction) of different parts of the boundary; and nonconvexity and handedness of the shape.
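The sketch below shows a generic hybrid (Hamiltonian) Monte Carlo update of the kind referred to above, written for an arbitrary differentiable energy; it is a sketch only, and the authors' signed-distance-based energy and shape configuration space are not reproduced here. Function names, step sizes, and the toy Gaussian target are assumptions for illustration.

```python
import numpy as np

def hmc_step(q, energy, grad_energy, step=0.05, n_leapfrog=20, rng=np.random):
    """One hybrid Monte Carlo update targeting the density proportional to exp(-energy(q))."""
    p = rng.standard_normal(q.shape)                 # resample momentum
    q_new, p_new = q.copy(), p.copy()
    p_new -= 0.5 * step * grad_energy(q_new)         # leapfrog integration
    for _ in range(n_leapfrog - 1):
        q_new += step * p_new
        p_new -= step * grad_energy(q_new)
    q_new += step * p_new
    p_new -= 0.5 * step * grad_energy(q_new)
    h_old = energy(q) + 0.5 * np.sum(p**2)
    h_new = energy(q_new) + 0.5 * np.sum(p_new**2)
    if rng.random() < np.exp(h_old - h_new):         # Metropolis accept/reject
        return q_new
    return q

# Toy usage: sample a standard Gaussian.
q = np.zeros(4)
for _ in range(100):
    q = hmc_step(q, lambda q: 0.5 * np.sum(q**2), lambda q: q)
```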
Semi-unification is the combination of first-order unification and first-order matching. The undecidability of semi-unification was proven by Kfoury, Tiuryn, and Urzyczyn in the 1990s by Turing reduction from Turing machine immortality (existence of a diverging configuration). The particular Turing reduction is intricate, uses non-computational principles, and involves various intermediate models of computation. The present work gives a constructive many-one reduction from the Turing machine halting problem to semi-unification. This establishes RE-completeness of semi-unification under many-one reductions. Computability of the reduction function as well as constructivity and correctness of the argument are witnessed by an axiom-free mechanization in the Coq proof assistant. Arguably, this serves as comprehensive, precise, and surveyable evidence for the result at hand. The mechanization is incorporated into the existing, well-maintained Coq library of undecidability proofs. Notably, a variant of Hooper's argument for the undecidability of Turing machine immortality is part of the mechanization.
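For clarity, the distinction drawn above is between Turing reductions and the stronger notion of many-one reductions; the standard definition of the latter (notation assumed) is
\[
  P \preceq_m Q
  \;\Longleftrightarrow\;
  \exists\, f \text{ computable such that } \forall x:\; x \in P \Leftrightarrow f(x) \in Q .
\]
Since the halting problem is RE-complete, such a reduction from it, together with semi-unification itself being recursively enumerable, yields RE-completeness of semi-unification under many-one reductions.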
Current approaches to generic segmentation start by creating a hierarchy of nested image partitions and then specifying a segmentation from it. Our first contribution is to describe several ways, most of them new, of specifying segmentations using the hierarchy elements. Then, we consider the best hierarchy-induced segmentation specified by a limited number of hierarchy elements. We focus on a common quality measure for binary segmentations, the Jaccard index (also known as IoU). Optimizing the Jaccard index is highly non-trivial, yet we propose an efficient approach for doing exactly that. In this way, we obtain algorithm-independent upper bounds on the quality of any segmentation created from the hierarchy. We find that the obtainable segmentation quality varies significantly depending on how the segments are specified by the hierarchy elements, and that representing a segmentation with only a few hierarchy elements is often possible. (Code is available.)
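For reference, the Jaccard index (IoU) of a candidate segment $S$ against a ground-truth segment $G$ is
\[
  J(S, G) \;=\; \frac{|S \cap G|}{|S \cup G|},
\]
and maximizing it over unions of a limited number of hierarchy elements is the combinatorial problem addressed above.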