In this work, we employ importance sampling (IS) to track a small over-threshold probability of the running maximum associated with the solution of a stochastic differential equation (SDE) within the framework of ensemble Kalman filtering (EnKF). Between two observation times of the EnKF, we propose to use IS with respect to the initial condition of the SDE, IS with respect to the Wiener process via a stochastic optimal control formulation, and combined IS with respect to both the initial condition and the Wiener process. Both IS strategies require approximating the solution of the Kolmogorov backward equation (KBE) with boundary conditions. In multidimensional settings, we employ a Markovian projection dimension-reduction technique to approximate the solution of the KBE by solving only a one-dimensional PDE. The proposed ideas are tested on two illustrative examples, a double-well SDE and Langevin dynamics, and yield significant variance reduction compared to the standard Monte Carlo method and to another sampling-based IS technique, namely multilevel cross-entropy.
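To make the re-weighting mechanics concrete, here is a minimal sketch of IS with respect to the initial condition for a double-well SDE; it is not the method above (which derives near-optimal changes of measure from the KBE), and the proposal shift `mu_is` is a hypothetical tuning parameter.

```python
# Minimal sketch, assuming a double-well SDE dX = -4X(X^2 - 1) dt + sig dW.
# `mu_is` is a hypothetical proposal shift; the method above instead derives
# near-optimal changes of measure from the KBE.
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, sig, K = 1.0, 200, 0.7, 1.0
dt = T / n_steps
mu0, s0 = -1.0, 0.25                      # target law of X_0: N(mu0, s0^2)

def hits(x0):
    """Euler-Maruyama paths; returns 1.0 where max_t X_t >= K."""
    x = x0.copy()
    hit = x >= K
    for _ in range(n_steps):
        x += -4.0 * x * (x**2 - 1.0) * dt \
             + sig * np.sqrt(dt) * rng.standard_normal(x.size)
        hit |= x >= K
    return hit.astype(float)

n = 50_000
p_mc = hits(mu0 + s0 * rng.standard_normal(n)).mean()   # standard Monte Carlo

mu_is = -0.25                             # shift the initial law toward the barrier
x0 = mu_is + s0 * rng.standard_normal(n)
log_w = (-(x0 - mu0)**2 + (x0 - mu_is)**2) / (2 * s0**2)  # log density ratio
p_is = (np.exp(log_w) * hits(x0)).mean()  # unbiased re-weighted estimator
print(p_mc, p_is)
```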
We develop a multiscale scanning method to find anomalies in a $d$-dimensional random field in the presence of nuisance parameters. This covers the common situation that either the baseline level or additional parameters such as the variance are unknown and have to be estimated from the data. We argue that state-of-the-art approaches to determining asymptotically correct critical values for multiscale scanning statistics will in general fail when such parameters are naively replaced by plug-in estimators. Instead, we suggest estimating the nuisance parameters on the largest scale and using (only) smaller scales for multiscale scanning. We prove a uniform invariance principle for the resulting adjusted multiscale statistic (AMS), which is widely applicable and provides a computationally feasible way to simulate asymptotically correct critical values. We illustrate the implications of our theoretical results in a simulation study and in a real-data example from super-resolution STED microscopy. This allows us to identify interesting regions inside a specimen in a pre-scan with controlled family-wise error rate.
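The split between scales can be illustrated with a simplified one-dimensional sketch under assumed i.i.d. Gaussian noise: nuisance parameters are fit on the largest scale (the full window), while the scan itself runs only over strictly smaller scales. Calibrated critical values are not reproduced here.

```python
# Minimal 1-d sketch under assumed i.i.d. Gaussian noise; the paper's
# statistic and critical values are more general and are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
n = 4096
y = rng.standard_normal(n)
y[1000:1032] += 1.5                           # a small planted anomaly

mu_hat, sigma_hat = y.mean(), y.std(ddof=1)   # nuisance fit on the largest scale

def scan(y, scales):
    """Max absolute standardized window mean over all positions and scales."""
    c = np.concatenate(([0.0], np.cumsum(y - mu_hat)))
    best = -np.inf
    for h in scales:
        sums = c[h:] - c[:-h]                 # all window sums of length h
        best = max(best, (np.abs(sums) / (sigma_hat * np.sqrt(h))).max())
    return best

print(scan(y, scales=[16, 32, 64, 128]))      # only scales << n are scanned
```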
Modern regression applications can involve hundreds or thousands of variables, which motivates the use of variable selection methods. Bayesian variable selection defines a posterior distribution on the possible subsets of the variables (usually termed models) to express uncertainty about which variables are strongly linked to the response. This can be used to provide Bayesian model-averaged predictions or inference, and to understand the relative importance of different variables. However, there has been little work on meaningful representations of this uncertainty beyond first-order summaries. We introduce Cartesian credible sets to address this gap. The elements of these sets are formed by concatenating sub-models defined on each block of a partition of the variables. Investigating these sub-models allows us to understand whether the models in the Cartesian credible set always/never/sometimes include a particular variable or group of variables, and provides a useful summary of model uncertainty. We introduce methods to find these sets that emphasize ease of understanding. The potential of the method is illustrated on regression problems with both small and large numbers of variables.
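One way such a set could be assembled from posterior draws is sketched below; this is an assumed construction for illustration, not necessarily the paper's own search, and the partition and per-block level are hypothetical inputs.

```python
# Minimal sketch with simulated posterior draws; the per-block level and the
# partition are assumed inputs, and the paper's own search may differ.
from collections import Counter
from itertools import product
import numpy as np

rng = np.random.default_rng(2)
p, n_draws = 6, 5000
incl_prob = [0.9, 0.85, 0.1, 0.5, 0.45, 0.05]
gamma = (rng.random((n_draws, p)) < incl_prob).astype(int)  # inclusion draws
blocks = [(0, 1), (2,), (3, 4), (5,)]          # a partition of the variables
level = 0.9

block_sets = []
for b in blocks:
    counts = Counter(tuple(row) for row in gamma[:, b])
    kept, mass = [], 0.0
    for sub_model, c in counts.most_common():  # most probable sub-models first
        kept.append(sub_model)
        mass += c / n_draws
        if mass >= level:
            break
    block_sets.append(kept)

# Elements of the set: concatenations of sub-models, one per block.
cartesian_set = list(product(*block_sets))
print(len(cartesian_set), cartesian_set[:3])
```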
The fine-grained complexity of computing the Fr\'echet distance has been a topic of much recent work, starting with the quadratic SETH-based conditional lower bound of Bringmann from 2014. Subsequent work established largely the same complexity lower bounds for the Fr\'echet distance in 1D. However, the imbalanced case, which Bringmann showed to be tight in dimensions $d\geq 2$, was still left open. Filling this gap, we show that a faster algorithm for the Fr\'echet distance in the imbalanced case is possible: given two 1-dimensional curves of complexity $n$ and $n^{\alpha}$ for some $\alpha \in (0,1)$, we can compute their Fr\'echet distance in $O(n^{2\alpha} \log^2 n + n \log n)$ time. This rules out a conditional lower bound of the form $(nm)^{1-\varepsilon}$, which Bringmann showed for $d \geq 2$ and any $\varepsilon>0$, in turn establishing a strict separation with the setting $d=1$. At the heart of our approach lies a data structure that stores a 1-dimensional curve $P$ of complexity $n$ and supports queries with a curve $Q$ of complexity $m$ for the continuous Fr\'echet distance between $P$ and $Q$. The data structure has size $O(n\log n)$ and query time $O(m^2 \log^2 n)$. Our proof uses a key lemma based on the concept of visiting orders, which may be of independent interest. We demonstrate this by substantially simplifying the correctness proof of a clustering algorithm by Driemel, Krivo\v{s}ija and Sohler from 2015.
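For orientation, the textbook $O(nm)$ dynamic program for the discrete Fr\'echet distance of one-dimensional curves is sketched below; it does not implement the paper's data structure or the continuous distance.

```python
# Textbook O(nm) dynamic program for the discrete Frechet distance of 1-d
# curves, shown only for orientation; it does not implement the paper's
# data structure or the continuous distance.
def discrete_frechet(P, Q):
    n, m = len(P), len(Q)
    D = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            cost = abs(P[i] - Q[j])
            if i == 0 and j == 0:
                D[i][j] = cost
            elif i == 0:
                D[i][j] = max(cost, D[0][j - 1])
            elif j == 0:
                D[i][j] = max(cost, D[i - 1][0])
            else:
                D[i][j] = max(cost, min(D[i - 1][j], D[i - 1][j - 1], D[i][j - 1]))
    return D[-1][-1]

print(discrete_frechet([0.0, 1.0, 0.5, 2.0], [0.0, 2.0]))  # 1.0
```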
We present a novel family of particle discretisation methods for the nonlinear Landau collision operator. We exploit the metriplectic structure underlying the Vlasov-Maxwell-Landau system to obtain discretisation schemes that automatically preserve mass, momentum, and energy, ensure monotonic dissipation of entropy, and are thus guaranteed to respect the laws of thermodynamics. In contrast to recent works that used radial basis functions and similar methods for regularisation, here we use an auxiliary spline or finite element representation of the distribution function to this end. Discrete gradient methods are employed to guarantee the aforementioned properties in the time-discrete setting as well.
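A toy illustration of the discrete gradient idea invoked above (not the Landau discretisation itself): for a quadratic energy the midpoint gradient is a valid discrete gradient, so an implicit midpoint step conserves the energy of a skew-gradient flow exactly.

```python
# Toy discrete gradient integrator, assuming the quadratic energy
# H(y) = |y|^2 / 2: the midpoint gradient at (y + y')/2 is then a valid
# discrete gradient, and the step conserves H exactly for dy/dt = J grad H
# with skew-symmetric J.
import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])      # skew-symmetric structure matrix
H = lambda y: 0.5 * y @ y

def step(y, dt, n_fixed_point=50):
    y_new = y.copy()
    for _ in range(n_fixed_point):           # solve the implicit midpoint step
        y_new = y + dt * J @ (0.5 * (y + y_new))
    return y_new

y, dt = np.array([1.0, 0.0]), 0.1
e0 = H(y)
for _ in range(1000):
    y = step(y, dt)
print(e0, H(y))                              # energy conserved to solver tolerance
```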
Bayesian sampling is an important task in statistics and machine learning. Over the past decade, many ensemble-type sampling methods have been proposed. In contrast to classical Markov chain Monte Carlo methods, these new methods deploy a large number of interacting samples, and the communication between these samples is crucial in speeding up the convergence. To justify the validity of these sampling strategies, the concept of interacting particles naturally calls for mean-field theory. The theory establishes a correspondence between particle interactions encoded in a set of coupled ODEs/SDEs and a PDE that characterizes the evolution of the underlying distribution. This bridges numerical algorithms with the PDE theory used to show convergence in time. A variety of mathematical machinery has been developed for such mean-field analyses, and we showcase two examples: the coupling method and the compactness argument built upon the martingale strategy. The former has been deployed to show the convergence of the ensemble Kalman sampler and ensemble Kalman inversion, and the latter will be shown to be immensely powerful in proving the validity of the Vlasov-Boltzmann simulator.
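A minimal sketch of the kind of interacting-particle system this mean-field machinery addresses: an ensemble-Kalman-sampler-like dynamics for a one-dimensional Gaussian target, simplified, with hypothetical parameters and without finite-ensemble correction terms. Each particle feels the others only through the ensemble covariance, which is exactly the coupling a mean-field PDE limit describes.

```python
# Simplified sketch (hypothetical parameters, no finite-ensemble correction
# terms): an ensemble-Kalman-sampler-like dynamics for a 1-d Gaussian target
# N(m, s2), i.e. potential Phi(x) = (x - m)^2 / (2 s2).
import numpy as np

rng = np.random.default_rng(3)
m, s2 = 2.0, 0.5
N, dt, n_steps = 500, 0.01, 2000
x = rng.standard_normal(N)                    # initial ensemble

for _ in range(n_steps):
    c = x.var()                               # ensemble covariance: the coupling
    x += -c * (x - m) / s2 * dt + np.sqrt(2.0 * c * dt) * rng.standard_normal(N)

print(x.mean(), x.var())                      # approaches (m, s2) for large N
```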
Imaging through fog significantly impacts fields such as object detection and recognition. In conditions of extremely low visibility, essential image information can be obscured, rendering standard extraction methods ineffective. Traditional digital processing techniques, such as histogram stretching, aim to mitigate fog effects by enhancing the object light contrast diminished by atmospheric scattering. However, these methods often lose effectiveness under inhomogeneous illumination. This paper introduces a novel approach that adaptively filters out background illumination under extremely low visibility and preserves only the essential signal information. Additionally, we employ a visual optimization strategy based on image gradients to eliminate grayscale banding. Finally, the image is transformed through maximum histogram equalization to achieve high contrast while maintaining fidelity to the original information. Our proposed method significantly enhances signal clarity in conditions of extremely low visibility and outperforms existing algorithms.
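As a sketch of the final stage, plain CDF-based histogram equalization is shown below, standing in for the maximum histogram equalization mentioned above, whose exact variant is not specified here.

```python
# Plain CDF-based equalization on a synthetic low-contrast "foggy" patch;
# the exact maximum-equalization variant used in the paper is not
# reproduced here.
import numpy as np

def equalize(img):
    """img: 2-d uint8 array; returns the histogram-equalized uint8 image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (255.0 * cdf[img]).astype(np.uint8)

foggy = (40 + 20 * np.random.default_rng(4).random((64, 64))).astype(np.uint8)
out = equalize(foggy)
print(foggy.min(), foggy.max(), "->", out.min(), out.max())  # contrast stretched
```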
We describe a protocol for studying text-to-video retrieval training with unlabeled videos, where we assume (i) no access to labels for any videos, i.e., no access to the set of ground-truth captions, but (ii) access to labeled images, where the labels take the form of text. Using image expert models is a realistic scenario given that annotating images is cheaper, and therefore more scalable, than expensive video labeling schemes. Recently, zero-shot image experts such as CLIP have established a new strong baseline for video understanding tasks. In this paper, we make use of this progress and instantiate the image experts with two types of models: a text-to-image retrieval model to provide an initial backbone, and image captioning models to provide a supervision signal for unlabeled videos. We show that automatically labeling video frames with image captioning enables text-to-video retrieval training. This process adapts the features to the target domain at no manual annotation cost, consequently outperforming the strong zero-shot CLIP baseline. During training, we sample captions from the multiple video frames that best match the visual content, and perform temporal pooling over frame representations by scoring frames according to their relevance to each caption. We conduct extensive ablations to provide insights and demonstrate the effectiveness of this simple framework by outperforming the CLIP zero-shot baselines on text-to-video retrieval on three standard datasets, namely ActivityNet, MSR-VTT, and MSVD.
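A minimal sketch of the caption-scored temporal pooling described above, with assumed shapes and a hypothetical softmax temperature; the paper's exact relevance scoring may differ.

```python
# Assumed shapes and a hypothetical softmax temperature of 0.07; the
# paper's exact relevance scoring may differ.
import numpy as np

rng = np.random.default_rng(5)
F, D, C = 8, 512, 3                          # frames, feature dim, captions
frames = rng.standard_normal((F, D))         # per-frame (e.g. CLIP) features
captions = rng.standard_normal((C, D))       # caption text features

def l2norm(x):
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

frames, captions = l2norm(frames), l2norm(captions)
scores = captions @ frames.T                 # (C, F) caption-frame relevance
weights = np.exp(scores / 0.07)
weights /= weights.sum(axis=1, keepdims=True)
video_per_caption = weights @ frames         # (C, D) caption-conditioned pooling
print(video_per_caption.shape)
```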
Logistic regression is widely used in many areas of knowledge. Several works compare the performance of lasso and maximum likelihood estimation in logistic regression. However, some of these works do not perform simulation studies, and the remaining ones do not consider scenarios in which the ratio of the number of covariates to the sample size is high. In this work, we compare the discrimination performance of lasso and maximum likelihood estimation in logistic regression using simulation studies and applications. Variable selection is done by lasso and, when maximum likelihood estimation is used, by stepwise selection. We consider a wide range of values for the ratio of the number of covariates to the sample size. The main conclusion of the work is that lasso has better discrimination performance than maximum likelihood estimation when the ratio of the number of covariates to the sample size is high.
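A minimal sketch of such a comparison under an assumed simulation design: L1-penalized versus unpenalized logistic regression with the number of covariates close to the sample size, with discrimination measured by out-of-sample AUC.

```python
# Assumed sparse-truth simulation with p close to n; requires
# scikit-learn >= 1.2 for `penalty=None`.
import numpy as np
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
n, p = 200, 150                              # high covariates-to-sample-size ratio
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 1.0                               # only 5 covariates matter
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ beta))).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

lasso = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5).fit(Xtr, ytr)
mle = LogisticRegression(penalty=None, max_iter=5000).fit(Xtr, ytr)

for name, model in [("lasso", lasso), ("mle", mle)]:
    print(name, roc_auc_score(yte, model.predict_proba(Xte)[:, 1]))
```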
Weak coin flipping is the cryptographic task where Alice and Bob remotely flip a coin but want opposite outcomes. This work studies this task in the device-independent regime, where Alice and Bob trust neither each other nor their quantum devices. The best known protocol was devised over a decade ago by Silman, Chailloux, Aharon, Kerenidis, Pironio, and Massar, with bias $\varepsilon \approx 0.33664$, where the bias is a commonly adopted security measure for coin flipping protocols. This work presents two techniques to lower the bias of such protocols, namely self-testing and abort-phobic compositions. We apply these techniques to the SCAKPM '11 protocol above and, assuming a continuity conjecture, lower the bias to $\varepsilon \approx 0.29104$. We believe that these techniques could be useful in the design of device-independent protocols for a variety of other tasks. Independently of weak coin flipping, en route to our results, we show how one can test $n-1$ out of $n$ devices and estimate the performance of the remaining device for later use in the protocol. The proof uses linear programming and, due to its generality, may find applications elsewhere.
The uncertainty-penalized Bayesian information criterion (UBIC) has been proposed as a new model-selection criterion for data-driven partial differential equation (PDE) discovery. In this paper, we show that using the UBIC is equivalent to applying the conventional BIC to a set of overparameterized models derived from the potential regression models of different complexity measures. The result indicates that the UBIC shares the asymptotic properties of the BIC.
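For reference, a minimal sketch of the conventional BIC in its Gaussian-noise form, applied to an assumed nested family of regression models of growing complexity, as arises in sparse PDE discovery.

```python
# Assumed Gaussian-noise setting with a nested family of candidate models;
# BIC = n log(RSS / n) + k log(n).
import numpy as np

rng = np.random.default_rng(7)
n, k_max = 500, 6
Theta = rng.standard_normal((n, k_max))      # library of candidate terms
y = Theta[:, :2] @ np.array([1.0, -0.5]) + 0.1 * rng.standard_normal(n)

for k in range(1, k_max + 1):                # model using the first k terms
    coef, *_ = np.linalg.lstsq(Theta[:, :k], y, rcond=None)
    rss = np.sum((y - Theta[:, :k] @ coef) ** 2)
    print(k, n * np.log(rss / n) + k * np.log(n))  # minimized near k = 2
```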