Complex field imaging, which captures both the amplitude and phase information of input optical fields or objects, can offer rich structural insights into samples, such as their absorption and refractive index distributions. However, conventional image sensors are intensity-based and inherently lack the capability to directly measure the phase distribution of a field. This limitation can be overcome using interferometric or holographic methods, often supplemented by iterative phase retrieval algorithms, leading to a considerable increase in hardware complexity and computational demand. Here, we present a complex field imager design that enables snapshot imaging of both the amplitude and quantitative phase information of input fields using an intensity-based sensor array without any digital processing. Our design utilizes successive deep learning-optimized diffractive surfaces that are structured to collectively modulate the input complex field, forming two independent imaging channels that perform amplitude-to-amplitude and phase-to-intensity transformations between the input and output planes within a compact optical design, axially spanning ~100 wavelengths. The intensity distributions of the output fields at these two channels on the sensor plane directly correspond to the amplitude and quantitative phase profiles of the input complex field, eliminating the need for any digital image reconstruction algorithms. We experimentally validated the efficacy of our complex field diffractive imager designs through 3D-printed prototypes operating at the terahertz spectrum, with the output amplitude and phase channel images closely aligning with our numerical simulations. We envision that this complex field imager will have various applications in security, biomedical imaging, sensing and material science, among others.
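Schematically, writing the input field as $A(x,y)\,e^{i\phi(x,y)}$, the two sensor-plane channels described above behave (up to constants, and as a simplified reading of the design rather than its exact forward model) as
\[
  I_{\mathrm{amp}}(x,y) \;\propto\; |A(x,y)|^{2},
  \qquad
  I_{\mathrm{phase}}(x,y) \;\propto\; \phi(x,y),
\]
so the recorded intensity images can be read directly as the amplitude and quantitative phase distributions of the input, without digital reconstruction.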
We propose a new approach to the autoregressive spatial functional model based on the notion of signature, which represents a function as an infinite series of its iterated integrals. This approach has the advantage of being applicable to a wide range of processes. After establishing theoretical guarantees for the proposed model, we show in a simulation study and on a real data set that the new approach achieves performance competitive with the traditional model.
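For reference, the signature invoked here is the standard object from rough path theory: for a path $X = (X^1,\dots,X^d)$ on $[0,T]$, it is the collection of all iterated integrals (this is the textbook definition, not a construction specific to this paper),
\[
  S(X) \;=\; \big(1,\; S^{(1)},\; S^{(2)},\;\dots\big),
  \qquad
  S^{(k)}_{i_1\cdots i_k} \;=\; \int_{0<t_1<\cdots<t_k<T} \mathrm{d}X^{i_1}_{t_1}\cdots \mathrm{d}X^{i_k}_{t_k},
\]
and truncating the series at a finite level yields the finite-dimensional features used in practice.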
Bayesian inference for complex models with an intractable likelihood can be tackled using algorithms performing many calls to computer simulators. These approaches are collectively known as "simulation-based inference" (SBI). Recent SBI methods have made use of neural networks (NN) to provide approximate, yet expressive constructs for the unavailable likelihood function and the posterior distribution. However, they do not generally achieve an optimal trade-off between accuracy and computational demand. In this work, we propose an alternative that provides both approximations to the likelihood and the posterior distribution, using structured mixtures of probability distributions. Our approach produces accurate posterior inference when compared to state-of-the-art NN-based SBI methods, while exhibiting a much smaller computational footprint. We illustrate our results on several benchmark models from the SBI literature.
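As a minimal illustration of the mixture-based idea (a toy sketch with an invented simulator and invented settings, not the paper's actual construction), one can fit a Gaussian mixture to joint parameter-data samples and then condition it, in closed form, on the observed data to obtain a mixture approximation of the posterior:

```python
# Toy simulation-based inference with a Gaussian-mixture surrogate (illustrative only).
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def simulator(theta):
    # Hypothetical simulator: a noisy nonlinear observation of the parameter.
    return theta + 0.5 * np.sin(theta) + 0.1 * rng.standard_normal(theta.shape)

# 1) Draw parameters from the prior and run the simulator.
theta = rng.normal(0.0, 2.0, size=(5000, 1))
x = simulator(theta)
joint = np.hstack([theta, x])

# 2) Fit a mixture of Gaussians to the joint (theta, x) samples -- the surrogate.
gmm = GaussianMixture(n_components=8, covariance_type="full", random_state=0).fit(joint)

# 3) Condition the joint mixture on an observation x_o: the posterior is again a mixture.
def conditional_mixture(gmm, x_o, d_theta):
    weights, means, covs = [], [], []
    for w, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mu_t, mu_x = mu[:d_theta], mu[d_theta:]
        S_tt, S_tx, S_xx = S[:d_theta, :d_theta], S[:d_theta, d_theta:], S[d_theta:, d_theta:]
        gain = S_tx @ np.linalg.inv(S_xx)
        means.append(mu_t + gain @ (x_o - mu_x))
        covs.append(S_tt - gain @ S_tx.T)
        weights.append(w * multivariate_normal.pdf(x_o, mean=mu_x, cov=S_xx))
    weights = np.asarray(weights)
    return weights / weights.sum(), np.asarray(means), np.asarray(covs)

x_o = np.array([1.2])
w_post, m_post, c_post = conditional_mixture(gmm, x_o, d_theta=1)
print("approximate posterior mean:", float(w_post @ m_post[:, 0]))
```

Because conditioning a joint Gaussian mixture is available in closed form, such surrogates remain cheap to evaluate once fitted.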
In this paper, we harness a result in point process theory, specifically the expectation of the weighted $K$-function, where the weighting is done by the true first-order intensity function. This theoretical result can be employed as an estimation method to derive parameter estimates for a particular model assumed for the data. The underlying motivation is to avoid the difficulties of handling and maximizing the complex likelihoods that arise in point process models. The exploited result makes our method theoretically applicable to any model specification. In this paper, we restrict our study to Poisson models, whose likelihood forms the basis of many more complex point process models. In this context, our proposed method can estimate the vector of local parameters corresponding to the points of the analyzed point pattern without introducing any additional complexity compared to global estimation. We illustrate the method through simulation studies for both purely spatial and spatio-temporal point processes, and we analyze two real datasets concerning environmental problems to show more complex scenarios based on the Poisson model.
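Schematically (planar case, edge corrections and the paper's exact local formulation omitted), the intensity-weighted $K$-function over an observation window $W$ is
\[
  \hat K_{w}(r;\theta) \;=\; \frac{1}{|W|}\sum_{i}\sum_{j\neq i}\frac{\mathbf{1}\{\|x_i - x_j\|\le r\}}{\lambda_\theta(x_i)\,\lambda_\theta(x_j)},
\]
and when the weights use the true intensity of a Poisson process its expectation is $\pi r^{2}$; parameters can therefore be estimated by matching $\hat K_{w}(r;\theta)$ to $\pi r^{2}$ over a range of $r$ values.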
We study space--time isogeometric discretizations of the linear acoustic wave equation that use splines of arbitrary degree p, both in space and time. We propose a space--time variational formulation that is obtained by adding a non-consistent penalty term of order 2p+2 to the bilinear form coming from integration by parts. This formulation, when discretized with tensor-product spline spaces with maximal regularity in time, is unconditionally stable: the mesh size in time is not constrained by the mesh size in space. We give extensive numerical evidence for the good stability, approximation, dissipation and dispersion properties of the stabilized isogeometric formulation, comparing against stabilized finite element schemes, for a range of wave propagation problems with constant and variable wave speed.
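For context, the underlying model problem is the linear acoustic wave equation, and the stabilized discrete problem has the schematic structure below; the notation is only indicative, with $s_h$ standing for the non-consistent penalty of order $2p+2$ whose precise form and scaling are defined in the paper:
\[
  \partial_{tt} u \;-\; \nabla\cdot\big(c^{2}\nabla u\big) \;=\; f \quad \text{in } \Omega\times(0,T),
  \qquad
  A_h(u_h, v_h) \;+\; \delta\, s_h(u_h, v_h) \;=\; F(v_h)\quad \forall v_h,
\]
with $u_h$, $v_h$ drawn from tensor-product spline spaces of degree $p$ in space and time.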
Varimax factor rotations, while popular among practitioners in psychology and statistics since being introduced by H. Kaiser, have historically been viewed with skepticism and suspicion by some theoreticians and mathematical statisticians. Now, work by K. Rohe and M. Zeng provides new, fundamental insight: varimax rotations provably perform statistical estimation in certain classes of latent variable models when paired with spectral-based matrix truncations for dimensionality reduction. We build on this newfound understanding of varimax rotations by developing further connections to network analysis and spectral methods rooted in entrywise matrix perturbation analysis. Concretely, this paper establishes the asymptotic multivariate normality of vectors in varimax-transformed Euclidean point clouds that represent low-dimensional node embeddings in certain latent space random graph models. We address related concepts including network sparsity, data denoising, and the role of matrix rank in latent variable parameterizations. Collectively, these findings, at the confluence of classical and contemporary multivariate analysis, reinforce methodology and inference procedures grounded in matrix factorization-based techniques. Numerical examples illustrate our findings and supplement our discussion.
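The following toy sketch (synthetic two-block graph, illustrative only; it is not the authors' estimator or asymptotic analysis) shows the kind of pipeline the abstract refers to: a low-rank adjacency spectral embedding followed by a varimax rotation of the embedding vectors.

```python
# Spectral node embedding + varimax rotation on a synthetic graph (illustrative only).
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=200, tol=1e-8):
    """Kaiser's varimax rotation, computed with the standard SVD iteration."""
    p, k = Phi.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        B = Phi.T @ (L**3 - (gamma / p) * L @ np.diag((L**2).sum(axis=0)))
        U, s, Vt = np.linalg.svd(B)
        R = U @ Vt
        d_new = s.sum()
        if d_new < d_old * (1.0 + tol):
            break
        d_old = d_new
    return Phi @ R, R

# Synthetic two-block graph, a stand-in for a latent space random graph model.
rng = np.random.default_rng(1)
z = np.repeat([0, 1], 100)
P = np.where(z[:, None] == z[None, :], 0.30, 0.05)
A = rng.binomial(1, P)
A = np.triu(A, 1)
A = A + A.T

# Rank-2 adjacency spectral embedding: leading eigenvectors scaled by sqrt(|eigenvalue|).
vals, vecs = np.linalg.eigh(A)
idx = np.argsort(np.abs(vals))[::-1][:2]
X_hat = vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

X_rot, R = varimax(X_hat)  # varimax-transformed node embedding (point cloud)
print(X_rot[:5])
```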
The sparsity-ranked lasso (SRL) has been developed for model selection and estimation in the presence of interactions and polynomials. The main tenet of the SRL is that an algorithm should be more skeptical of higher-order polynomials and interactions *a priori* compared to main effects, and hence the inclusion of these more complex terms should require a higher level of evidence. In time series, the same idea of ranked prior skepticism can be applied to the possibly seasonal autoregressive (AR) structure of the series during the model fitting process, becoming especially useful in settings with uncertain or multiple modes of seasonality. The SRL can naturally incorporate exogenous variables, with streamlined options for inference and/or feature selection. The fitting process is quick even for large series with a high-dimensional feature set. In this work, we discuss both the formulation of this procedure and the software we have developed for its implementation via the **fastTS** R package. We explore the performance of our SRL-based approach in a novel application involving the autoregressive modeling of hourly emergency room arrivals at the University of Iowa Hospitals and Clinics. We find that the SRL is considerably faster than its competitors, while producing more accurate predictions.
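The core tenet can be illustrated with a weighted lasso in which penalty weights grow with lag order. The sketch below is a Python stand-in with a placeholder weighting rule; it is not the fastTS implementation (which is an R package and uses the SRL's own weighting scheme).

```python
# Ranked-penalty idea on a seasonal AR series (illustrative only, not fastTS).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Simulate an AR series with a weak seasonal component at lag 24.
n, burn = 1000, 200
y = np.zeros(n + burn)
for t in range(24, n + burn):
    y[t] = 0.4 * y[t - 1] + 0.2 * y[t - 2] + 0.25 * y[t - 24] + rng.standard_normal()
y = y[burn:]

# Lagged design matrix with many candidate (possibly seasonal) lags.
max_lag = 48
X = np.column_stack([y[max_lag - k:-k] for k in range(1, max_lag + 1)])
target = y[max_lag:]

# Placeholder ranked penalty weights: skepticism grows with lag order.
lags = np.arange(1, max_lag + 1)
w = np.sqrt(lags)

# Weighted lasso via column rescaling: fitting the lasso on X/w penalizes w_j * |beta_j|.
model = Lasso(alpha=0.05, max_iter=50_000).fit(X / w, target)
beta = model.coef_ / w
print("selected lags:", np.nonzero(np.abs(beta) > 1e-8)[0] + 1)
```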
Mendelian randomization is an instrumental variable method that utilizes genetic information to investigate the causal effect of a modifiable exposure on an outcome. In most cases, the exposure changes over time. Understanding the time-varying causal effect of the exposure can yield detailed insights into mechanistic effects and the potential impact of public health interventions. Recently, a growing number of Mendelian randomization studies have attempted to explore time-varying causal effects. However, the proposed approaches oversimplify temporal information and rely on overly restrictive structural assumptions, limiting their reliability in addressing time-varying causal problems. This paper considers a novel approach to estimate time-varying effects through continuous-time modelling by combining functional principal component analysis and weak-instrument-robust techniques. Our method effectively utilizes available data without making strong structural assumptions and can be applied in general settings where the exposure measurements occur at different timepoints for different individuals. We demonstrate through simulations that our proposed method performs well in estimating time-varying effects and provides reliable inference results when the time-varying effect form is correctly specified. The method could theoretically be used to estimate arbitrarily complex time-varying effects. However, there is a trade-off between model complexity and instrument strength. Estimating complex time-varying effects requires instruments that are unrealistically strong. We illustrate the application of this method in a case study examining the time-varying effects of systolic blood pressure on urea levels.
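One schematic reading of the modelling strategy (our notation, with the instrumenting and weak-instrument-robust estimation steps omitted) is
\[
  Y_i \;=\; \beta_0 \;+\; \int_{0}^{T} \alpha(t)\, X_i(t)\,\mathrm{d}t \;+\; \varepsilon_i,
  \qquad
  \alpha(t) \;\approx\; \sum_{k=1}^{K} \gamma_k\, \phi_k(t),
\]
where $\phi_1,\dots,\phi_K$ are leading functional principal components of the exposure trajectories, so that estimating the time-varying effect $\alpha(t)$ reduces to estimating finitely many coefficients $\gamma_k$; the trade-off noted above arises because a larger $K$ (a more complex $\alpha$) demands stronger instruments.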
This work proposes a discretization of the acoustic wave equation with possibly oscillatory coefficients based on a superposition of discrete solutions to spatially localized subproblems computed with an implicit time discretization. Using exponentially decaying entries of the global system matrices and an appropriate partition of unity, it is proved that the superposition of localized solutions remains close to the solution of the (global) implicit scheme. This justifies the localized (and, in particular, parallel) computation on multiple overlapping subdomains. Moreover, a restart is introduced after a certain number of time steps to maintain a moderate overlap of the subdomains. Overall, the approach may be understood as a domain decomposition strategy in space on successive short time intervals that completely avoids inner iterations. Numerical examples are presented.
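A schematic reading of the superposition (the precise localization and restart are specified in the paper; the notation here is only indicative): with overlapping subdomains $\{\omega_j\}$ and a partition of unity $\{\chi_j\}$ satisfying $\sum_j \chi_j \equiv 1$, one implicit time step is approximated by
\[
  u^{n+1} \;\approx\; \sum_{j} u_j^{n+1},
\]
where each $u_j^{n+1}$ solves the implicit step posed on $\omega_j$ with data localized by $\chi_j$, so the subdomain problems can be solved in parallel without inner iterations.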
Temporal network data is often encoded as time-stamped interaction events between senders and receivers, such as co-authoring scientific articles or communicating via email. A number of relational event frameworks have been proposed to address specific issues raised by complex temporal dependencies. These models attempt to quantify how individual behaviour, endogenous and exogenous factors, as well as interactions with other individuals modify the network dynamics over time. It is often of interest to determine whether changes in the network can be attributed to endogenous mechanisms reflecting natural relational tendencies, such as reciprocity or triadic effects. The propensity to form or receive ties can also, at least partially, be related to actor attributes. Nodal heterogeneity in the network is often modelled by including actor-specific or dyadic covariates. However, comprehensively capturing all personality traits is difficult in practice, if not impossible. A failure to account for heterogeneity may confound the substantive effect of key variables of interest. This work shows that failing to account for node-level sender and receiver effects can induce ghost triadic effects. We propose a random-effect extension of the relational event model to deal with these problems and show that it often outperforms more traditional remedies, such as including in-degree and out-degree statistics. These results suggest that apparent violations of the hierarchy principle caused by insufficient information about nodal heterogeneity can be resolved by including random effects in the relational event model as standard practice.
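A schematic form of the proposed extension (our notation): the rate at which sender $s$ addresses receiver $r$ at time $t$ is modelled as
\[
  \lambda_{sr}(t) \;=\; \exp\!\big(\beta^{\top} z_{sr}(t) \;+\; a_s \;+\; b_r\big),
  \qquad a_s \sim \mathcal{N}(0,\sigma_a^2),\quad b_r \sim \mathcal{N}(0,\sigma_b^2),
\]
where $z_{sr}(t)$ collects endogenous statistics (e.g. reciprocity, triadic closure) and observed covariates, and the sender and receiver random effects $a_s$, $b_r$ absorb unobserved nodal heterogeneity that would otherwise masquerade as triadic effects.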
We define an asymptotically normal, wavelet-based, strongly consistent estimator for the Hurst parameter of any Hermite process. This estimator is obtained by considering a modified wavelet variation in which the coefficients are carefully chosen to be, up to negligible remainders, independent. We use Stein-Malliavin calculus to prove that this wavelet variation satisfies a multidimensional central limit theorem, with an explicit bound on the Wasserstein distance.
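For orientation (standard facts about wavelet analysis of self-similar processes with stationary increments; the paper's modified variation refines this construction), the wavelet coefficients of an $H$-self-similar process $X$ with respect to a mother wavelet $\psi$,
\[
  d_{j,k} \;=\; 2^{-j/2}\int_{\mathbb{R}} \psi\big(2^{-j}t - k\big)\, X_t \,\mathrm{d}t,
\]
satisfy $\mathbb{E}\big[d_{j,k}^{2}\big] = c_{\psi}\, 2^{j(2H+1)}$, so $\log_2$ of the empirical wavelet variation is affine in the scale $j$ with slope $2H+1$; this scaling is what makes regression on wavelet variations a natural route to estimating $H$.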