Let $(X_t)$ be a reflected diffusion process in a bounded convex domain in $\mathbb R^d$, solving the stochastic differential equation $$dX_t = \nabla f(X_t)\, dt + \sqrt{2f(X_t)}\, dW_t, \quad t \ge 0,$$ with $W_t$ a $d$-dimensional Brownian motion. The data $X_0, X_D, \dots, X_{ND}$ consist of discrete measurements, and the time interval $D$ between consecutive observations is fixed so that one cannot `zoom' into the observed path of the process. The goal is to solve the non-linear inverse problem of inferring the diffusivity $f$ and the associated transition operator $P_{t,f}$. We prove injectivity theorems and stability estimates for the maps $f \mapsto P_{t,f} \mapsto P_{D,f}$, $t<D$. Using these estimates, we then establish the statistical consistency of a class of Bayesian algorithms based on Gaussian process priors for the infinite-dimensional parameter $f$, and show optimality of some of the convergence rates obtained. We discuss an underlying relationship between `fast convergence' and the `hot spots' conjecture from spectral geometry.
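To make the observation scheme concrete, here is a minimal, purely illustrative Python sketch of simulating low-frequency observations $X_0, X_D, \dots, X_{ND}$ of a reflected diffusion. The unit-ball domain, the specific diffusivity $f$, and the crude projection-based reflection are assumptions for illustration only, not the setup analysed in the paper.

```python
import numpy as np

# Illustrative diffusivity f > 0 and its gradient (assumed, not from the paper).
def f(x):
    return 1.0 + 0.5 * np.exp(-np.sum(x**2))

def grad_f(x):
    return -x * np.exp(-np.sum(x**2))

def project_to_unit_ball(x):
    """Crude reflection at the boundary: project back onto the closed unit ball."""
    r = np.linalg.norm(x)
    return x / r if r > 1.0 else x

def simulate_low_frequency(d=2, D=0.1, N=200, dt=1e-3, rng=None):
    """Euler-Maruyama for dX = grad f(X) dt + sqrt(2 f(X)) dW, observed every D."""
    rng = np.random.default_rng(rng)
    steps_per_obs = int(round(D / dt))
    x = np.zeros(d)                      # X_0
    obs = [x.copy()]
    for _ in range(N):
        for _ in range(steps_per_obs):
            dw = rng.normal(scale=np.sqrt(dt), size=d)
            x = x + grad_f(x) * dt + np.sqrt(2.0 * f(x)) * dw
            x = project_to_unit_ball(x)  # keep the path inside the domain
        obs.append(x.copy())
    return np.array(obs)                 # shape (N+1, d): X_0, X_D, ..., X_{ND}

data = simulate_low_frequency()
print(data.shape)
```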
This note complements the upcoming paper "One-Way Ticket to Las Vegas and the Quantum Adversary" by Belovs and Yolcu, to be presented at QIP 2023. I develop the ideas behind the adversary bound / universal algorithm duality presented therein in a different form, using the same perspective as Barnum-Saks-Szegedy, in which query algorithms are defined as sequences of feasible reduced density matrices rather than sequences of unitaries. This form may be easier to digest for a general quantum information audience: it avoids defining the "unidirectional relative $\gamma_{2}$-bound" and relating it to query algorithms explicitly. The proof is also more general, because the lower bound (and the universal query algorithm) applies to a class of optimal control problems rather than just query problems. That is in addition to the advantages to be discussed in Belovs-Yolcu, namely a more elementary algorithm and correctness proof that avoids phase estimation and spectral analysis, allows for a limited treatment of noise, and removes another $\Theta(\log(1/\epsilon))$ factor from the runtime compared to the previous discrete-time algorithm.
This manuscript makes two contributions to the field of change-point detection. In a general change-point setting, we provide a generic algorithm for aggregating local homogeneity tests into an estimator of change-points in a time series. Interestingly, we establish that the error rates of the collection of tests directly translate into detection properties of the change-point estimator. This generic scheme is then applied to various problems including covariance change-point detection, nonparametric change-point detection and sparse multivariate mean change-point detection. For the latter, we derive minimax optimal rates that are adaptive to the unknown sparsity and to the distance between change-points when the noise is Gaussian. For sub-Gaussian noise, we introduce a variant that is optimal in almost all sparsity regimes.
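As a rough illustration of the aggregation idea (not the paper's actual algorithm), the Python sketch below scans a univariate series with a CUSUM-like local homogeneity test and keeps well-separated locations where the test fires. The window size, threshold, and test statistic are all illustrative assumptions.

```python
import numpy as np

def local_mean_test(left, right):
    """Illustrative local homogeneity statistic: normalized mean difference (CUSUM-like)."""
    n, m = len(left), len(right)
    return abs(left.mean() - right.mean()) * np.sqrt(n * m / (n + m))

def aggregate_change_points(x, window=30, threshold=3.0, test=local_mean_test):
    """Scan candidate locations, keep those whose local test fires, enforce min spacing."""
    stats = np.zeros(len(x))
    for t in range(window, len(x) - window):
        stats[t] = test(x[t - window:t], x[t:t + window])
    candidates = [t for t in range(len(x)) if stats[t] > threshold]
    estimates = []
    for t in sorted(candidates, key=lambda t: -stats[t]):   # strongest candidates first
        if all(abs(t - s) >= window for s in estimates):     # keep well-separated locations
            estimates.append(t)
    return sorted(estimates)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
print(aggregate_change_points(x))   # should flag a point near index 200
```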
As spatial audio is enjoying a surge in popularity, data-driven machine learning techniques that have proven successful in other domains are increasingly used to process head-related transfer function measurements. However, these techniques require large amounts of data, whereas existing datasets range from tens to the low hundreds of data points. It therefore becomes attractive to combine several of these datasets, even though they were measured under different conditions. In this paper, we first establish the common ground between a number of datasets, then investigate potential pitfalls of mixing datasets. We perform a simple experiment to test the relevance of the remaining differences between datasets when applying machine learning techniques. Finally, we pinpoint the most relevant differences.
Quantum algorithms for optimization problems are of general interest. Despite recent progress in classical lower bounds for nonconvex optimization under different settings and quantum lower bounds for convex optimization, quantum lower bounds for nonconvex optimization are still widely open. In this paper, we conduct a systematic study of quantum query lower bounds on finding $\epsilon$-approximate stationary points of nonconvex functions, and we consider the following two important settings: 1) having access to $p$-th order derivatives; or 2) having access to stochastic gradients. The classical query lower bound is $\Omega\big(\epsilon^{-\frac{1+p}{p}}\big)$ in the first setting, and $\Omega(\epsilon^{-4})$ in the second setting (or $\Omega(\epsilon^{-3})$ if the stochastic gradient function is mean-squared smooth). We extend all these classical lower bounds to the quantum setting. They match the corresponding classical algorithmic results, demonstrating that there is no quantum speedup for finding $\epsilon$-approximate stationary points of nonconvex functions with $p$-th order derivative inputs or stochastic gradient inputs, whether with or without the mean-squared smoothness assumption. Technically, our quantum lower bounds are obtained by showing that the sequential nature of classical hard instances in all these settings also applies to quantum queries, preventing any quantum speedup beyond revealing information about the stationary points sequentially.
This paper presents an algorithm that solves the Soft k-Means problem globally. Unlike Fuzzy c-Means, Soft k-Means (SkM) has a matrix factorization-type objective and has been shown to have a close relation with popular probability-decomposition-type clustering methods, e.g., Left Stochastic Clustering (LSC). Though some work has been done on solving the Soft k-Means problem, existing methods usually employ an alternating minimization scheme or projected gradient descent, which cannot guarantee global optimality due to the non-convexity of SkM. In this paper, we present a sufficient condition for a feasible solution of the Soft k-Means problem to be globally optimal and show that the output of the proposed algorithm satisfies it. Moreover, for the Soft k-Means problem, we provide discussions on stability, solution non-uniqueness, and the connection with LSC. Then, a new model, named Minimal Volume Soft k-Means (MVSkM), is proposed to address the solution non-uniqueness issue. Finally, experimental results support our theoretical findings.
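For context, here is a minimal Python sketch of the alternating-minimization baseline mentioned above, assuming a factorization-type objective $\min_{A,C} \|X - AC\|_F^2$ with row-stochastic soft assignments $A$. The exact objective and constraints used in the paper may differ, and, as noted above, such a scheme carries no global-optimality guarantee.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of each row of v onto the probability simplex."""
    u = np.sort(v, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1)
    k = np.arange(1, v.shape[1] + 1)
    rho = (u - (css - 1) / k > 0).sum(axis=1)          # number of active coordinates
    theta = (css[np.arange(len(v)), rho - 1] - 1) / rho
    return np.maximum(v - theta[:, None], 0.0)

def soft_kmeans_alt_min(X, k, iters=200, lr=1e-2, rng=0):
    """Alternating minimization for min_{A,C} ||X - A C||_F^2, rows of A on the simplex."""
    rng = np.random.default_rng(rng)
    n, _ = X.shape
    A = project_simplex(rng.random((n, k)))
    C = X[rng.choice(n, k, replace=False)]              # initialize centroids from data
    for _ in range(iters):
        grad_A = (A @ C - X) @ C.T                      # projected-gradient step on A
        A = project_simplex(A - lr * grad_A)
        C = np.linalg.lstsq(A, X, rcond=None)[0]        # closed-form update of centroids
    return A, C

A, C = soft_kmeans_alt_min(np.random.rand(200, 5), k=3)
print(A.shape, C.shape)
```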
We propose a new wavelet-based method for density estimation when the data are size-biased. More specifically, we consider a power of the density of interest, where this power exceeds 1/2. Warped wavelet bases are employed, where warping is attained by some continuous cumulative distribution function. A special case is the conventional orthonormal wavelet estimation, where the warping distribution is the standard continuous uniform. We show that both linear and nonlinear wavelet estimators are consistent, with optimal and/or near-optimal rates. Monte Carlo simulations are performed to compare four special settings which are easy to interpret in practice. An application with a real dataset on fatal traffic accidents involving alcohol illustrates the method. We observe that warped bases provide more flexible and superior estimates for both simulated and real data. Moreover, we find that estimating the power of a density (for instance, its square root) further improves the results.
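The warping idea in its simplest form can be sketched as follows: if $G$ is a strictly increasing CDF with density $g$ and $U = G(X)$, then $f(x) = h(G(x))\,g(x)$, where $h$ is the density of $U$ on $[0,1]$, so estimating $h$ in a wavelet basis and unwarping yields an estimate of $f$. The Python sketch below uses a Haar (histogram-type) linear estimator of $h$ and ignores both the size-bias correction and the power transform that are the paper's actual focus; the warping CDF and resolution level are illustrative choices.

```python
import numpy as np
from scipy import stats

def warped_haar_density(x_data, G, g, level=5):
    """Linear Haar (histogram-type) estimator of f via the warped sample U = G(X):
    f(x) = h(G(x)) * g(x), where h is the density of U on [0, 1]."""
    u = G(x_data)
    nbins = 2 ** level
    counts, _ = np.histogram(u, bins=nbins, range=(0.0, 1.0))
    h_hat = counts / (len(u) * (1.0 / nbins))            # piecewise-constant density of U

    def f_hat(x):
        idx = np.clip((G(x) * nbins).astype(int), 0, nbins - 1)
        return h_hat[idx] * g(x)
    return f_hat

# Example: warp with a normal CDF (an assumed choice), estimate a Gaussian-mixture density.
rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(-1, 0.5, 2000), rng.normal(2, 1.0, 2000)])
warp = stats.norm(loc=0.5, scale=2.0)
f_hat = warped_haar_density(sample, warp.cdf, warp.pdf, level=5)
print(np.round(f_hat(np.linspace(-4, 6, 9)), 3))
```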
The paper addresses state estimation for clock synchronization in the presence of factors that degrade the quality of synchronization, such as temperature variations and delay asymmetry. These working conditions make synchronization a challenging problem in many wireless environments, such as Wireless Sensor Networks or WiFi. Dynamic state estimation is investigated, as it is essential to cope with non-stationary noise. The two-way timing message exchange synchronization protocol is taken as a reference. No a-priori assumptions are made on the stochastic environment, and no temperature measurements are performed. The algorithms are fully specified offline, without the need to tune parameters according to the working conditions. The presented approach proves robust to a wide range of temperature variations, different delay distributions, and levels of asymmetry in the transmission path.
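For reference, the static per-round computation underlying the two-way timing message exchange is the standard NTP/PTP-style formula below, which assumes a symmetric path delay. The paper's dynamic estimators track the clock state across many such rounds under asymmetry and non-stationary noise, which this sketch does not attempt.

```python
def two_way_exchange_estimate(t1, t2, t3, t4):
    """One round of a two-way timing message exchange.
    t1: master send time (master clock), t2: slave receive time (slave clock),
    t3: slave send time (slave clock),   t4: master receive time (master clock).
    Under a symmetric path delay, the slave's clock offset and the one-way delay follow.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: true offset 5 ms, one-way delay 2 ms.
print(two_way_exchange_estimate(t1=0.000, t2=0.007, t3=0.010, t4=0.007))
```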
We address the problem of integrating data from multiple observational and interventional studies in order to compute counterfactuals in structural causal models. We derive a likelihood characterisation for the overall data that leads us to extend a previous EM-based algorithm from the case of a single study to that of multiple ones. The new algorithm learns to approximate the (unidentifiability) region of model parameters from such mixed data sources. On this basis, it delivers interval approximations to counterfactual results, which collapse to points in the identifiable case. The algorithm is very general: it works on semi-Markovian models with discrete variables and can compute any counterfactual. Moreover, it automatically determines whether a problem is feasible (the parameter region being nonempty), which is a necessary step to avoid producing incorrect results. Systematic numerical experiments show the effectiveness and accuracy of the algorithm, while hinting at the benefits of integrating heterogeneous data to obtain informative bounds in the unidentifiable case.
This paper presents three classes of metalinear structures that abstract some of the properties of Hilbert spaces. These structures include a binary relation that expresses orthogonality between elements and enables the definition of an operation that generalizes the projection operation in Hilbert spaces. The logic defined by the most general class has a unary connective and two dual binary connectives that are neither commutative nor associative. It is a substructural sequent logic in which the Exchange rule is severely limited and Weakening is also restricted. This provides a logic for quantum measurements with an attractive proof theory. A completeness result is proved. An additional property of the binary relation ensures that the structure satisfies the MacLane-Steinitz exchange property and is therefore a kind of matroid. Preliminary results on richer structures, based on a sort of real inner product that generalizes the Born factor of Quantum Physics, are also presented.
Deep neural networks have achieved remarkable success in computer vision tasks. Existing neural networks mainly operate in the spatial domain with fixed input sizes. For practical applications, images are usually large and have to be downsampled to the predetermined input size of neural networks. Even though the downsampling operations reduce computation and the required communication bandwidth, they remove both redundant and salient information indiscriminately, which results in accuracy degradation. Inspired by digital signal processing theories, we analyze the spectral bias from the frequency perspective and propose a learning-based frequency selection method to identify the trivial frequency components that can be removed without accuracy loss. The proposed method of learning in the frequency domain leverages the identical structures of well-known neural networks, such as ResNet-50, MobileNetV2, and Mask R-CNN, while accepting frequency-domain information as the input. Experimental results show that learning in the frequency domain with static channel selection can achieve higher accuracy than the conventional spatial downsampling approach while further reducing the input data size. Specifically, for ImageNet classification with the same input size, the proposed method achieves 1.41% and 0.66% top-1 accuracy improvements on ResNet-50 and MobileNetV2, respectively. Even with half the input size, the proposed method still improves the top-1 accuracy on ResNet-50 by 1%. In addition, we observe a 0.8% average precision improvement on Mask R-CNN for instance segmentation on the COCO dataset.
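As a rough illustration of the frequency-domain input representation (not the paper's exact pipeline), the Python sketch below rearranges an image into per-block DCT coefficient channels and applies a static channel mask. The 8x8 block size and the particular channels kept are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn

def image_to_freq_channels(img, block=8):
    """Blockwise 2-D DCT: (H, W) image -> (H/block, W/block, block*block) channel tensor,
    where channel c holds one DCT coefficient position across all blocks."""
    h, w = img.shape
    assert h % block == 0 and w % block == 0
    blocks = img.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    coeffs = dctn(blocks, axes=(2, 3), norm='ortho')        # per-block 2-D DCT
    return coeffs.reshape(h // block, w // block, block * block)

def select_channels(freq, keep):
    """Static channel selection: drop all but the chosen frequency channels."""
    return freq[..., keep]

img = np.random.rand(224, 224).astype(np.float32)
freq = image_to_freq_channels(img)                          # (28, 28, 64)
low_freq = select_channels(freq, keep=np.arange(16))        # keep an illustrative subset
print(freq.shape, low_freq.shape)
```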