This paper presents a novel approach for estimating the Koopman operator defined on a reproducing kernel Hilbert space (RKHS) and its spectra. We propose an estimation method, which we call Jet Dynamic Mode Decomposition (JetDMD), that leverages the intrinsic structure of the RKHS and the geometric notion of jets to enhance the estimation of the Koopman operator. This method refines the traditional Extended Dynamic Mode Decomposition (EDMD) in accuracy, especially in the numerical estimation of eigenvalues. We prove JetDMD's superiority through explicit error bounds and convergence rates for special positive definite kernels, offering a solid theoretical foundation for its performance. We also delve into the spectral analysis of the Koopman operator, proposing the notion of an extended Koopman operator within a framework of rigged Hilbert spaces. This notion leads to a deeper understanding of estimated Koopman eigenfunctions and allows us to capture them outside the original function space. Through the theory of rigged Hilbert spaces, our study provides a principled methodology for analyzing the estimated spectrum and eigenfunctions of Koopman operators, and enables eigendecomposition within a rigged RKHS. We also propose a new, effective method for reconstructing the dynamical system from temporally sampled trajectory data, with a solid theoretical guarantee. We conduct several numerical simulations using the van der Pol oscillator, the Duffing oscillator, the H\'enon map, and the Lorenz attractor, and illustrate the performance of JetDMD with clear numerical computations of eigenvalues and accurate predictions of the dynamical systems.
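For orientation, the following is a minimal sketch of the standard EDMD baseline that JetDMD refines, written in Python/NumPy. The toy map, the monomial dictionary, and all variable names are illustrative assumptions; the RKHS and jet machinery of JetDMD itself is not reproduced here.

```python
import numpy as np

# Snapshot pairs (x_i, y_i) with y_i = F(x_i); F is an illustrative planar map.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 2))
F = lambda x: np.stack([0.9 * x[:, 0], 0.8 * x[:, 1] + 0.1 * x[:, 0] ** 2], axis=1)
Y = F(X)

def dictionary(x):
    """Monomial observables up to degree 2 (an illustrative choice)."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.stack([np.ones_like(x1), x1, x2, x1 ** 2, x1 * x2, x2 ** 2], axis=1)

Psi_X, Psi_Y = dictionary(X), dictionary(Y)

# Least-squares Koopman matrix K with Psi(Y) ~ Psi(X) K; for an observable
# g = Psi c, the action of the Koopman operator is approximated by K c.
K, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)

# Eigenvalues of K are the EDMD estimates of Koopman eigenvalues.
print(np.sort(np.abs(np.linalg.eigvals(K)))[::-1])
```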
We study a query model of computation in which an n-vertex k-hypergraph can be accessed only via its independence oracle or via its colourful independence oracle, and each oracle query may incur a cost depending on the size of the query. In each of these models, we obtain oracle algorithms to approximately count the hypergraph's edges, and we unconditionally prove that no oracle algorithm for this problem can have significantly smaller worst-case oracle cost than our algorithms.
MIMO technology has been a textbook topic for several decades and has been adopted in 4G and 5G systems. With the recent evolution of 5G and beyond networks, designed to cover a wide range of use cases with increasingly complex applications, it is essential to have network simulation tools (such as ns-3) to evaluate MIMO performance from the network perspective before real implementation. To date, the well-known ns-3 simulator has lacked single-user MIMO (SU-MIMO) models for 5G. In this paper, we detail the implementation models and provide an exhaustive evaluation of SU-MIMO in the 5G-LENA module of ns-3. In line with 3GPP 5G, we adopt a hybrid beamforming architecture and a closed-loop MIMO mechanism, and follow the 3GPP specifications for MIMO implementation, including channel state information feedback with precoding matrix indicator and rank indicator reports, and codebook-based precoding following Precoding Type-I (used for SU-MIMO). The simulation models are released as open source and currently support up to 32 antenna ports and 4 streams per user. The simulation results presented in this paper help test and verify the implemented models for different antenna array and antenna port configurations.
We consider covariance parameter estimation for Gaussian processes with functional inputs. From an increasing-domain asymptotics perspective, we prove the asymptotic consistency and normality of the maximum likelihood estimator. We extend these theoretical guarantees to scenarios that account for approximation errors in the inputs, which ensures the robustness of practical implementations relying on conventional sampling methods or projections onto a functional basis. Loosely speaking, both consistency and normality hold when the approximation error becomes negligible, a condition that is often achieved as the number of samples or basis functions becomes large. These latter asymptotic properties are illustrated through analytical examples, including one that covers the case of non-randomly perturbed grids, as well as several numerical illustrations.
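As a hedged illustration of this estimation setting, the sketch below (Python/SciPy) maximizes a Gaussian log-likelihood over covariance parameters after each functional input has been approximated by its coefficients in a small basis. The squared-exponential kernel, the sine basis, and the simulated data are illustrative assumptions, not the paper's framework.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, t = 60, np.linspace(0.0, 1.0, 200)

# Functional inputs on a fine grid, then projected onto a small sine basis
# (an illustrative stand-in for a conventional sampling/projection scheme).
basis = np.stack([np.sin(k * np.pi * t) for k in (1, 2, 3)])       # (3, 200)
inputs = rng.normal(size=(n, 3)) @ basis                            # (n, 200)
coords = inputs @ basis.T @ np.linalg.inv(basis @ basis.T)          # (n, 3) coefficients

def cov(theta):
    """Squared-exponential covariance on the projected inputs."""
    sigma2, ell = np.exp(theta)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    return sigma2 * np.exp(-0.5 * d2 / ell ** 2) + 1e-6 * np.eye(n)

y = rng.multivariate_normal(np.zeros(n), cov(np.log([1.5, 0.8])))

def neg_log_lik(theta):
    K = cov(theta)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)

res = minimize(neg_log_lik, x0=np.zeros(2), method="L-BFGS-B")
print("estimated (variance, lengthscale):", np.exp(res.x))
```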
We propose an interdisciplinary framework that combines Bayesian predictive inference, a well-established tool in Machine Learning, with Formal Methods rooted in the computer science community. Bayesian predictive inference allows for coherently incorporating uncertainty about unknown quantities by making use of methods or models that produce predictive distributions, which in turn inform decision problems. By formalizing these decision problems into properties with the help of spatio-temporal logic, we can formulate and predict how likely such properties are to be satisfied in the future at a certain location. Moreover, we can leverage our methodology to evaluate and compare models directly on their ability to predict the satisfaction of application-driven properties. The approach is illustrated in an urban mobility application, where the crowdedness in the center of Milan is proxied by aggregated mobile phone traffic data. We specify several desirable spatio-temporal properties related to city crowdedness, such as the fault tolerance of the network or the reachability of hospitals. After verifying these properties on draws from the posterior predictive distributions, we compare several spatio-temporal Bayesian models based on their overall and property-based predictive performance.
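For concreteness, the following sketch (Python) shows the generic verification step: draw trajectories from a posterior predictive distribution, check a bounded temporal property on each draw, and report the satisfaction frequency per location. The simulated draws, the "always below capacity" property, and the threshold are illustrative assumptions rather than the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Posterior predictive draws of crowdedness: (n_draws, n_locations, horizon).
# Simulated here; in practice these come from the fitted Bayesian model.
draws = rng.gamma(shape=5.0, scale=20.0, size=(2000, 10, 12))
capacity = 150.0  # illustrative threshold

def always_below(trajectory, threshold):
    """Bounded 'globally' operator: the value stays below `threshold`
    at every time step of the horizon."""
    return bool(np.all(trajectory < threshold))

# Estimated probability, per location, that the property is satisfied.
sat_prob = np.array([
    np.mean([always_below(draws[d, loc], capacity) for d in range(draws.shape[0])])
    for loc in range(draws.shape[1])
])
print(np.round(sat_prob, 3))
```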
Blind single image super-resolution (SISR) is a challenging task in image processing due to the ill-posed nature of the inverse problem. Complex degradations present in real-life images make it difficult to solve this problem using na\"ive deep learning approaches, where models are often trained on synthetically generated image pairs. Most of the effort so far has been focused on solving the inverse problem under constraints, such as a limited space of blur kernels and/or the assumption of noise-free input images. Yet the literature lacks a well-generalized deep learning-based solution that performs well on images with unknown and highly complex degradations. In this paper, we propose IKR-Net (Iterative Kernel Reconstruction Network) for blind SISR. In the proposed approach, kernel estimation, noise estimation, and high-resolution image reconstruction are carried out iteratively using dedicated deep models. The iterative refinement yields significant improvement in both the reconstructed image and the estimated blur kernel, even for noisy inputs. IKR-Net provides a generalized solution that can handle any type of blur and level of noise in the input low-resolution image. IKR-Net achieves state-of-the-art results in blind SISR, especially for noisy images with motion blur.
The covXtreme software provides functionality for estimation of marginal and conditional extreme value models, non-stationary with respect to covariates, and environmental design contours. Generalised Pareto (GP) marginal models of peaks over threshold are estimated, using a piecewise-constant representation for the variation of GP threshold and scale parameters on the (potentially multidimensional) covariate domain of interest. The conditional variation of one or more associated variates, given a large value of a single conditioning variate, is described using the conditional extremes model of Heffernan and Tawn (2004), the slope term of which is also assumed to vary in a piecewise-constant manner with covariates. Optimal smoothness of marginal and conditional extreme value model parameters with respect to covariates is estimated using cross-validated roughness-penalised maximum likelihood estimation. Uncertainties in model parameter estimates due to marginal and conditional extreme value threshold choice, and sample size, are quantified using a bootstrap resampling scheme. Estimates of environmental contours using various schemes, including the direct sampling approach of Huseby et al. (2013), are calculated by simulation or numerical integration under the fitted models. The software was developed in MATLAB for metocean applications, but is applicable generally to multivariate samples of peaks over threshold. The software and case study data can be downloaded from GitHub, with an accompanying user guide.
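For orientation only, here is a minimal sketch (in Python rather than the package's MATLAB) of the simplest stationary version of the marginal step: fit a generalised Pareto distribution to exceedances of a fixed threshold and compute a return level. The simulated data, the fixed 95% threshold, and the return-period choice are illustrative assumptions; covXtreme's piecewise-constant covariate representation, threshold uncertainty, and bootstrap are not reproduced.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative peaks-over-threshold sample (e.g. storm-peak significant wave height).
data = stats.weibull_min.rvs(c=1.5, scale=3.0, size=5000, random_state=rng)

threshold = np.quantile(data, 0.95)                 # fixed, illustrative threshold choice
exceedances = data[data > threshold] - threshold

# Fit GP(shape xi, scale sigma) to the exceedances, with location fixed at zero.
xi, _, sigma = stats.genpareto.fit(exceedances, floc=0.0)

# Level exceeded on average once per N peaks (standard POT return-level formula).
N = 10_000
rate = exceedances.size / data.size                 # exceedance probability of the threshold
return_level = threshold + sigma / xi * ((N * rate) ** xi - 1.0)
print(f"xi={xi:.3f}, sigma={sigma:.3f}, {N}-peak return level={return_level:.2f}")
```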
After reviewing a large body of literature on the modeling of bivariate discrete distributions with finite support, \cite{Gee20} made a compelling case for the use of $I$-projections in the sense of \cite{Csi75} as a sound way to attempt to decompose a bivariate probability mass function (p.m.f.) into its two univariate margins and a bivariate p.m.f.\ with uniform margins playing the role of a discrete copula. From a practical perspective, the necessary $I$-projections on Fr\'echet classes can be carried out using the iterative proportional fitting procedure (IPFP), also known as Sinkhorn's algorithm or matrix scaling in the literature. After providing conditions under which a bivariate p.m.f.\ can be decomposed in the aforementioned sense, we investigate, for starting bivariate p.m.f.s with rectangular supports, nonparametric and parametric estimation procedures as well as goodness-of-fit tests for the underlying discrete copula. Related asymptotic results are provided and build upon a differentiability result for $I$-projections on Fr\'echet classes which can be of independent interest. Theoretical results are complemented by finite-sample experiments and a data example.
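The projection step itself is classical; below is a minimal Python sketch of the IPFP (Sinkhorn) iteration, which alternately rescales the rows and columns of a starting bivariate p.m.f. with rectangular support until both margins match the targets, here uniform margins so that the limit plays the role of the discrete copula. The example p.m.f., tolerance, and iteration cap are illustrative.

```python
import numpy as np

def ipfp(p, row_marg, col_marg, tol=1e-12, max_iter=10_000):
    """Iterative proportional fitting: alternately rescale the rows and
    columns of the p.m.f. `p` to match the target margins."""
    q = p.copy()
    for _ in range(max_iter):
        q *= (row_marg / q.sum(axis=1))[:, None]   # match row margin
        q *= (col_marg / q.sum(axis=0))[None, :]   # match column margin
        if np.abs(q.sum(axis=1) - row_marg).max() < tol:
            break
    return q

# Starting bivariate p.m.f. with rectangular (here strictly positive) support.
rng = np.random.default_rng(4)
p = rng.random((4, 6))
p /= p.sum()

# Uniform margins: the I-projection onto this Fréchet class gives the 'discrete copula' part.
u = ipfp(p, np.full(4, 1 / 4), np.full(6, 1 / 6))
print(u.sum(axis=1), u.sum(axis=0))
```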
Unmanned Surface Vehicles (USVs) play a pivotal role in various applications, including surface rescue, commercial transactions, scientific exploration, water rescue, and military operations. Effective control of high-speed unmanned surface boats is a critical aspect of the overall USV system, particularly in challenging environments marked by complex surface obstacles and dynamic conditions such as time-varying surges, non-directional forces, and unpredictable winds. In this paper, we propose a data-driven control method based on Koopman theory, which constructs a high-dimensional linear model by mapping the low-dimensional nonlinear dynamics to a higher-dimensional linear space through data-driven identification. The observable USV dynamical system is reconstructed using online error learning. To enhance tracking control accuracy, we employ a Control Lyapunov Function (CLF)-Control Barrier Function (CBF)-Quadratic Programming (QP) approach to regulate the identified high-dimensional linear dynamical system. This approach facilitates error compensation, thereby achieving more precise tracking control.
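As a much-simplified sketch of the safety-filtering idea (Python), the snippet below applies a CBF constraint to a nominal tracking input for single-integrator dynamics, where the one-constraint QP has a closed-form projection. The Koopman-lifted USV model, the CLF term, and the full QP solver used in the paper's CLF-CBF-QP design are deliberately not reproduced, and the obstacle, gains, and dynamics are illustrative assumptions.

```python
import numpy as np

def cbf_filter(x, u_nom, x_obs, r, alpha=1.0):
    """Safety filter for single-integrator dynamics x_dot = u:
    min ||u - u_nom||^2  s.t.  grad_h(x) . u >= -alpha * h(x),
    with h(x) = ||x - x_obs||^2 - r^2. With one linear constraint the
    QP reduces to a closed-form projection onto a half-space."""
    h = np.dot(x - x_obs, x - x_obs) - r ** 2
    grad_h = 2.0 * (x - x_obs)
    slack = grad_h @ u_nom + alpha * h
    if slack >= 0.0:                                   # nominal input already safe
        return u_nom
    return u_nom - slack * grad_h / (grad_h @ grad_h)  # minimal safe correction

# Track a goal while staying outside a circular obstacle (illustrative setup).
x, goal = np.array([-2.0, 0.05]), np.array([2.0, 0.0])
x_obs, radius, dt = np.array([0.0, 0.0]), 0.5, 0.01
for _ in range(600):
    u = cbf_filter(x, 1.0 * (goal - x), x_obs, radius)
    x = x + dt * u
print(x, np.linalg.norm(x - x_obs) >= radius)
```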
In contemporary problems involving genetic or neuroimaging data, thousands of hypotheses need to be tested. Due to their high power and finite-sample guarantees on the type I error under weak assumptions, Monte Carlo permutation tests are often considered the gold standard in these settings. However, the enormous computational effort required for (thousands of) permutation tests is a major burden. Recently, Fischer and Ramdas (2024) constructed a permutation test for a single hypothesis in which the permutations are drawn sequentially, one by one, and the testing process can be stopped at any point without inflating the type I error. They showed that the number of permutations can be substantially reduced (under both the null and the alternative) while the power remains similar. We show how their approach can be modified to make it suitable for a broad class of multiple testing procedures. In particular, we discuss its use with the Benjamini-Hochberg procedure and illustrate the application on a large dataset.
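For concreteness, here is a minimal Python sketch of the Benjamini-Hochberg step applied to a vector of p-values (e.g. from permutation tests). The simulated p-values and the FDR level are illustrative, and the paper's actual contribution, producing these p-values from sequentially drawn permutations that may be stopped early, is not reproduced here.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean rejection mask controlling the FDR at level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below))        # largest i with p_(i) <= alpha * i / m
        reject[order[: k + 1]] = True
    return reject

# Illustrative p-values, e.g. obtained from Monte Carlo permutation tests.
rng = np.random.default_rng(5)
pvals = np.concatenate([rng.uniform(0, 0.01, 50), rng.uniform(0, 1, 950)])
print(benjamini_hochberg(pvals, alpha=0.1).sum(), "rejections")
```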
When writing high-performance code for numerical computation in a scripting language like MATLAB, it is crucial to vectorize the operations inside large for-loops. If not, the code becomes too slow to use, even for moderately large problems. However, in the process of vectorizing, the code often loses its original structure and becomes less readable. This is particularly true for finite element implementations, even though finite element methods are inherently structured. A basic remedy is to separate the vectorization part of the code from the mathematics part, which is easily achieved by building the code on top of basic linear algebra subprograms that are already vectorized; this idea has been used in a series of papers over the last fifteen years, yielding codes that are fast and yet structured and readable. We discuss the vectorized basic linear algebra package and introduce a formalism based on multi-linear algebra to explain and formally define the functions in the package, as well as the MATLAB page-wise functions (such as pagemtimes). We provide examples from computations of varying complexity, including the computation of normal vectors, volumes, and finite element methods. Benchmarking shows that the resulting computations are also fast. Using the library, we can write code that closely follows our mathematical thinking, making it easier to write, follow, reuse, and extend.
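As a rough analogue in Python/NumPy (the package itself is written in MATLAB), the sketch below contrasts a per-element loop with the page-wise batched multiplication that functions such as pagemtimes vectorize. The array shapes and the toy element-level computation are illustrative assumptions, not the library's API.

```python
import numpy as np

rng = np.random.default_rng(6)

# One small matrix per finite element, stored along the first ('page') axis.
n_elems = 100_000
B = rng.normal(size=(n_elems, 3, 2))               # e.g. gradients of local basis functions
w = rng.uniform(0.5, 1.5, size=(n_elems, 1, 1))    # e.g. quadrature weight * Jacobian det

# Loop version: clear but slow in an interpreted language.
K_loop = np.empty((n_elems, 3, 3))
for e in range(n_elems):
    K_loop[e] = w[e, 0, 0] * B[e] @ B[e].T

# Page-wise (batched) version: one call, same result.
K_page = w * np.matmul(B, np.swapaxes(B, 1, 2))

assert np.allclose(K_loop, K_page)
```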