Motivated by a heat radiative transport equation, we consider a particle undergoing collisions in a space-time domain and propose a method to sample its escape time, position, and direction from the domain. The first step of the procedure estimates how many elementary collisions can safely be taken before the chances of exiting the domain become too high; these collisions are then aggregated into a single movement. The method does not rely on any model approximation or on any particular parameter regime. We give theoretical results both under the normal approximation and without it, and test the method on benchmarks from the literature. The results confirm the theoretical predictions and show that the proposal is an efficient method for sampling the escape distribution of the particle.
In this article, we develop comprehensive frequency domain methods for estimating and inferring the second-order structure of spatial point processes. The key element is the use of the discrete Fourier transform (DFT) of the point pattern and its tapered counterpart. Under second-order stationarity, we show that both the DFTs and the tapered DFTs are asymptotically jointly independent Gaussian even when the DFTs share the same limiting frequencies. Based on these results, we establish an $\alpha$-mixing central limit theorem for a statistic formulated as a quadratic form of the tapered DFT. As applications, we derive the asymptotic distribution of the kernel spectral density estimator and establish a frequency domain inferential method for parametric stationary point processes. For the latter, the resulting model parameter estimator is computationally tractable and yields meaningful interpretations even in the case of model misspecification. We investigate the finite sample performance of our estimator through simulations, considering scenarios of both correctly specified and misspecified models. Furthermore, we extend our proposed DFT-based frequency domain methods to a class of non-stationary spatial point processes.
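To make the central object concrete, the following sketch computes the (untapered) DFT of a two-dimensional point pattern at a small set of frequencies. The normalization by the square root of the observation-window area is one common convention; the function name and setup are illustrative assumptions, not the authors' code.

```python
import numpy as np

def point_pattern_dft(points, freqs, window_area):
    """DFT of a point pattern {x_1, ..., x_n} in R^2:
    J(w) = |W|^(-1/2) * sum_j exp(-i <w, x_j>).
    (One common normalization; conventions vary.)"""
    points = np.asarray(points, dtype=float)   # shape (n, 2)
    freqs = np.asarray(freqs, dtype=float)     # shape (m, 2)
    phases = freqs @ points.T                  # (m, n) inner products <w, x_j>
    return np.exp(-1j * phases).sum(axis=1) / np.sqrt(window_area)

# Homogeneous (binomial) pattern of 200 points on the window W = [0, 10]^2.
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 10.0, size=(200, 2))
ws = np.array([[0.0, 0.0], [2 * np.pi / 10, 0.0]])  # zero and one Fourier frequency
J = point_pattern_dft(pts, ws, window_area=100.0)
```

At the zero frequency the DFT reduces to $n/\sqrt{|W|}$, which gives a quick sanity check on the normalization.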
We consider the problem of sampling a multimodal distribution with a Markov chain given a small number of samples from the stationary measure. Although mixing can be arbitrarily slow, we show that if the Markov chain has a $k$th order spectral gap, initialization from a set of $\tilde O(k/\varepsilon^2)$ samples from the stationary distribution will, with high probability over the samples, efficiently generate a sample whose conditional law is $\varepsilon$-close in TV distance to the stationary measure. In particular, this applies to mixtures of $k$ distributions satisfying a Poincar\'e inequality, with faster convergence when they satisfy a log-Sobolev inequality. Our bounds are stable to perturbations to the Markov chain, and in particular work for Langevin diffusion over $\mathbb R^d$ with score estimation error, as well as Glauber dynamics combined with approximation error from pseudolikelihood estimation. This justifies the success of data-based initialization for score matching methods despite slow mixing for the data distribution, and improves and generalizes the results of Koehler and Vuong (2023) to have linear, rather than exponential, dependence on $k$ and apply to arbitrary semigroups. As a consequence of our results, we show for the first time that a natural class of low-complexity Ising measures can be efficiently learned from samples.
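A one-dimensional toy version of this setting is unadjusted Langevin dynamics for a two-mode Gaussian mixture, initialized at samples from the stationary measure. Here the score is exact (no estimation error), and the step size, chain count, and horizon are our own illustrative choices rather than anything prescribed in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def score(x):
    """Exact score (gradient of the log-density) of 0.5*N(-3,1) + 0.5*N(3,1)."""
    w1, w2 = np.exp(-0.5 * (x + 3) ** 2), np.exp(-0.5 * (x - 3) ** 2)
    return (-(x + 3) * w1 - (x - 3) * w2) / (w1 + w2)

# Data-based initialization: start each chain at a sample from the target.
n_chains = 50
modes = rng.choice([-3.0, 3.0], size=n_chains)
x = modes + rng.standard_normal(n_chains)      # draws from the mixture

eta = 0.01                                      # step size
for _ in range(500):                            # unadjusted Langevin steps
    x = x + eta * score(x) + np.sqrt(2 * eta) * rng.standard_normal(n_chains)
```

Although a single chain would take exponentially long to cross between the modes, chains started from stationary samples already cover both modes in the right proportion, which is the phenomenon the result above makes rigorous.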
Learning causal effects of a binary exposure on time-to-event endpoints can be challenging because survival times may be partially observed due to censoring and systematically biased due to truncation. In this work, we present debiased machine learning-based nonparametric estimators of the joint distribution of a counterfactual survival time and baseline covariates for use when the observed data are subject to covariate-dependent left truncation and right censoring and when baseline covariates suffice to deconfound the relationship between exposure and survival time. Our inferential procedures explicitly allow the integration of flexible machine learning tools for nuisance estimation, and enjoy certain robustness properties. The approach we propose can be directly used to make pointwise or uniform inference on smooth summaries of the joint counterfactual survival time and covariate distribution, and can be valuable even in the absence of interventions, when summaries of a marginal survival distribution are of interest. We showcase how our procedures can be used to learn a variety of inferential targets and illustrate their performance in simulation studies.
Vintage factor analysis is an important class of factor analysis that aims first to find a low-dimensional representation of the original data, and then to seek a rotation such that the rotated low-dimensional representation is scientifically meaningful. The most widely used vintage factor analysis is Principal Component Analysis (PCA) followed by the varimax rotation. Despite its popularity, little theoretical guarantee is available to date, mainly because the varimax rotation requires solving a non-convex optimization over the set of orthogonal matrices. In this paper, we propose a deflation varimax procedure that solves for each row of the orthogonal matrix sequentially. In addition to its net computational gain and flexibility, we are able to fully establish theoretical guarantees for the proposed procedure in a broad context. Adopting this new deflation varimax as the second step after PCA, we further analyze the two-step procedure under a general class of factor models. Our results show that it estimates the factor loading matrix at the minimax optimal rate when the signal-to-noise ratio (SNR) is moderate or large. In the low-SNR regime, we offer a possible improvement over PCA with deflation varimax when the additive noise under the factor model is structured. The modified procedure is shown to be minimax optimal in all SNR regimes. Our theory is valid for finite samples and allows the number of latent factors to grow with the sample size, as well as the ambient dimension to grow with, or even exceed, the sample size. Extensive simulations and real data analysis further corroborate our theoretical findings.
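For context, the classical (non-deflation) varimax rotation can be computed with the standard SVD-based fixed-point iteration below. This is the textbook Kaiser rotation, not the sequential deflation procedure proposed here, and the function name and example data are our own.

```python
import numpy as np

def varimax(L, n_iter=200, tol=1e-10):
    """Classical varimax (Kaiser): find an orthogonal R maximizing the
    varimax criterion of L @ R via the standard SVD fixed-point iteration."""
    p, k = L.shape
    R = np.eye(k)
    crit_old = 0.0
    for _ in range(n_iter):
        Z = L @ R
        B = L.T @ (Z**3 - Z * (Z**2).sum(axis=0) / p)
        U, s, Vt = np.linalg.svd(B)
        R = U @ Vt                      # nearest orthogonal matrix to B
        crit = s.sum()
        if crit - crit_old < tol:
            break
        crit_old = crit
    return L @ R, R

# Rotate a block-sparse loading matrix by a random orthogonal matrix,
# then ask varimax to undo the mixing; R should be orthogonal.
rng = np.random.default_rng(1)
L0 = np.kron(np.eye(3), np.ones((10, 1)))        # 30 x 3 simple-structure loadings
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
L_rot, R = varimax(L0 @ Q)
```

Each iteration solves a full $k \times k$ orthogonal Procrustes problem; the deflation variant studied in the paper instead resolves one row of the orthogonal matrix at a time, which is what enables its theoretical analysis.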
This paper deals with two aspects of active imaging systems. First, we explore image processing algorithms to correct artefacts such as speckle, scintillation, and image dancing caused by atmospheric turbulence. Next, we examine how to evaluate the performance of such systems. To this end, we propose a modified version of the German TRM3 metric that yields MTF-like measures. We use the database acquired during the NATO-TG40 field trials for our tests.
We address the problem of identifying functional interactions among stochastic neurons with variable-length memory from their spiking activity. The neuronal network is modeled by a stochastic system of interacting point processes with variable-length memory. Each point process describes the activity of a single neuron, indicating whether it spikes at a given time. One neuron's influence on another can be either excitatory or inhibitory. To identify the existence and nature of an interaction between a neuron and its postsynaptic counterpart, we propose a model selection procedure based on observing the spiking activity of a finite set of neurons over a finite time. The procedure builds on the maximum likelihood estimator of the synaptic weight matrix of the neuronal network model. We prove the consistency of this maximum likelihood estimator, and then the consistency of the neighborhood interaction estimation procedure. The effectiveness of the proposed model selection procedure is demonstrated using simulated data, which validates the underlying theory. The method is also applied to analyze spike train data recorded from hippocampal neurons in rats during a visual attention task, where a computational model reconstructs the spiking activity and the results reveal biologically relevant information.
Robust and stable high order numerical methods for solving partial differential equations are attractive because they are efficient on modern and next generation hardware architectures. However, the design of provably stable numerical methods for nonlinear hyperbolic conservation laws poses a significant challenge. We present the dual-pairing (DP) and upwind summation-by-parts (SBP) finite difference (FD) framework for accurate and robust numerical approximations of nonlinear conservation laws. The framework has an inbuilt "limiter" whose goal is to detect and effectively resolve regions where the solution is poorly resolved and/or discontinuities are found. The DP SBP FD operators are a dual pair of backward and forward FD stencils, which together preserve the SBP property. In addition, the DP SBP FD operators are designed to be upwind, that is, they come with some innate dissipation everywhere, as opposed to traditional SBP and collocated discontinuous Galerkin spectral element methods, which can only induce dissipation through numerical fluxes acting at element interfaces. We combine the DP SBP operators with skew-symmetric and upwind flux splittings of nonlinear hyperbolic conservation laws. Our semi-discrete approximation is provably entropy-stable for arbitrary nonlinear hyperbolic conservation laws. The framework is high order accurate, provably entropy-stable, convergent, and avoids several pitfalls of current state-of-the-art high order methods. We give specific examples using the inviscid Burgers' equation, nonlinear shallow water equations, and compressible Euler equations of gas dynamics. Numerical experiments are presented to verify accuracy and demonstrate the robustness of our numerical framework.
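As a small illustration of why skew-symmetric splitting pairs well with SBP-type operators, the sketch below discretizes Burgers' equation with a periodic central difference, whose matrix is antisymmetric (the periodic SBP property with identity norm matrix), and verifies that the split form contributes no discrete energy growth. This is a toy demonstration, not the paper's DP upwind framework.

```python
import numpy as np

n = 64
h = 2 * np.pi / n

# Periodic second-order central difference: D^T = -D, so D is an SBP
# operator on a periodic grid with identity norm matrix.
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0 / (2 * h)
    D[i, (i - 1) % n] = -1.0 / (2 * h)

rng = np.random.default_rng(2)
u = rng.standard_normal(n)

# Skew-symmetric split of the Burgers flux: u_t = -(D(u^2) + u * (D u)) / 3.
rhs = -(D @ u**2 + u * (D @ u)) / 3.0

# Discrete energy rate d/dt (u^T u / 2) = u^T rhs vanishes identically,
# since u^T D u^2 = -(u^2)^T D u by antisymmetry of D.
energy_rate = u @ rhs
```

The cancellation holds for any grid function `u`, which is exactly the algebraic mechanism behind semi-discrete entropy stability; the upwind DP operators add controlled dissipation on top of this neutral skeleton.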
We perform a quantitative assessment of different strategies to compute the contribution due to surface tension in incompressible two-phase flows using a conservative level set (CLS) method. More specifically, we compare classical approaches, such as the direct computation of the curvature from the level set or the Laplace-Beltrami operator, with an evolution equation for the mean curvature recently proposed in the literature. We consider the test case of a static bubble, for which an exact solution for the pressure jump across the interface is available, and the test case of an oscillating bubble, showing the pros and cons of the different approaches.
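The "direct computation of the curvature from the level set" can be illustrated on a signed-distance function of a circle, where $\kappa = \nabla \cdot (\nabla\phi/|\nabla\phi|)$ should return $1/R$ on the interface. The grid size, regularization, and tolerance below are our own choices, not those of the paper's test cases.

```python
import numpy as np

N, R = 201, 1.0
x = np.linspace(-2.0, 2.0, N)
h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.hypot(X, Y) - R            # signed distance to a circle of radius R

# kappa = div( grad(phi) / |grad(phi)| ) via second-order central differences.
gx, gy = np.gradient(phi, h)
norm = np.sqrt(gx**2 + gy**2) + 1e-12   # small eps avoids division by zero
nx, ny = gx / norm, gy / norm
kappa = np.gradient(nx, h)[0] + np.gradient(ny, h)[1]

# Average curvature in a narrow band around the interface {phi = 0}.
band = np.abs(phi) < h
kappa_interface = kappa[band].mean()    # should be close to 1/R = 1.0
```

For the static-bubble test, this curvature feeds the surface-tension term whose exact counterpart is the Laplace pressure jump $\Delta p = \sigma \kappa$, which is what makes that case a clean benchmark.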
Moist thermodynamics is a fundamental driver of atmospheric dynamics across all scales, making accurate modeling of these processes essential for reliable weather forecasts and climate change projections. However, atmospheric models often make a variety of inconsistent approximations in representing moist thermodynamics. These inconsistencies can introduce spurious sources and sinks of energy, potentially compromising the integrity of the models. Here, we present a thermodynamically consistent and structure-preserving formulation of the moist compressible Euler equations. When discretised with a summation-by-parts method, our spatial discretisation conserves mass, water, entropy, and energy. These properties are achieved by discretising a skew-symmetric form of the moist compressible Euler equations, using entropy as a prognostic variable, and exploiting the summation-by-parts property of the discrete derivative operators. Additionally, we derive a discontinuous Galerkin spectral element method with energy- and tracer-variance-stable numerical fluxes, and experimentally verify our theoretical results through numerical simulations.
The susceptibility of timestepping algorithms to numerical instabilities is an important consideration when simulating partial differential equations (PDEs). Here we identify and analyze a pernicious numerical instability arising in pseudospectral simulations of nonlinear wave propagation that results in finite-time blow-up. The blow-up time scale is independent of the spatial resolution and spectral basis but sensitive to the timestepping scheme and the timestep size. The instability appears in multi-step and multi-stage implicit-explicit (IMEX) timestepping schemes of different orders of accuracy and has been found to manifest in simulations of soliton solutions of the Korteweg-de Vries (KdV) equation and traveling wave solutions of a nonlinear generalized Klein-Gordon equation. Focusing on the case of KdV solitons, we show that modal predictions from linear stability theory are unable to explain the instability because the spurious growth from linear dispersion is small and nonlinear sources of error growth converge too slowly in the limit of small timestep size. We then develop a novel multi-scale asymptotic framework that captures the slow, nonlinear accumulation of timestepping errors. The framework allows the solution to vary with respect to multiple time scales related to the timestep size and thus recovers the instability as a function of a slow time scale dictated by the order of accuracy of the timestepping scheme. We show that this approach correctly describes our simulations of solitons by making accurate predictions of the blow-up time scale and transient features of the instability. Our work demonstrates that studies of long-time simulations of nonlinear waves should exercise caution when validating their timestepping schemes.
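A minimal version of the setting studied here is a Fourier pseudospectral discretization of the KdV equation $u_t + 6uu_x + u_{xxx} = 0$, stepped with a first-order IMEX Euler scheme (stiff dispersion implicit, nonlinearity explicit). This toy integrator and its parameters are our own illustrative assumptions, not the specific schemes analyzed in the paper.

```python
import numpy as np

# KdV u_t + 6 u u_x + u_xxx = 0 on a periodic domain of length L.
L, N = 40.0, 256
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # spectral wavenumbers

c = 1.0
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * x) ** 2   # one-soliton initial data
m0 = u.mean()                                       # mean of u is conserved by KdV

dt, nsteps = 1e-3, 100
u_hat = np.fft.fft(u)
for _ in range(nsteps):
    # In Fourier space: u_hat_t = i k^3 u_hat - 3 i k FFT(u^2).
    # IMEX Euler treats the linear dispersion implicitly, the rest explicitly.
    u2_hat = np.fft.fft(np.fft.ifft(u_hat).real ** 2)
    u_hat = (u_hat - 3j * dt * k * u2_hat) / (1.0 - 1j * dt * k**3)
u = np.fft.ifft(u_hat).real
```

Over short times this scheme behaves well; the instability analyzed above is a slow, nonlinear accumulation of timestepping error, which is exactly why it escapes modal linear stability analysis and requires the multi-scale framework developed in the paper.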