Imaging problems such as the one arising in nanoCT require the solution of an inverse problem, where it is often taken for granted that the forward operator, i.e., the underlying physical model, is known exactly. In the present work we address the setting where the forward model is inexact due to stochastic or deterministic deviations during the measurement process. In particular, we investigate the performance of non-learned iterative reconstruction methods that account for this inexactness, as well as learned reconstruction schemes based on U-Nets and conditional invertible neural networks. The latter also provide the opportunity for uncertainty quantification. A large synthetic data set in line with a typical nanoCT setting is provided, and extensive numerical experiments are conducted to evaluate the proposed methods.
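To make the notion of an inexact forward operator concrete, here is a minimal, self-contained sketch (toy sizes and noise levels are invented for illustration; this is not one of the reconstruction methods of the paper): a Landweber iteration run with a perturbed operator $\tilde A$ in place of the true $A$, whose reconstruction error stalls at a level set by the model deviation.

```python
import numpy as np

rng = np.random.default_rng(0)

# True forward operator A and a perturbed version A_tilde standing in for
# stochastic/deterministic model inexactness during the measurement process.
n, m = 64, 96
A = rng.standard_normal((m, n)) / np.sqrt(m)
A_tilde = A + 0.02 * rng.standard_normal((m, n))   # inexact model used for reconstruction

x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)     # measurements from the true physics

# Landweber iteration with the inexact operator: the reconstruction error
# saturates at a level determined by the operator perturbation.
omega = 1.0 / np.linalg.norm(A_tilde, 2) ** 2      # step size from the spectral norm
x = np.zeros(n)
for _ in range(500):
    x = x + omega * A_tilde.T @ (y - A_tilde @ x)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```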
This article presents a new tool for the automatic detection of meteors. The Fast Meteor Detection Toolbox (FMDT) is able to detect meteor sightings by analyzing videos acquired by cameras onboard weather balloons or within airplanes, with stabilization. The challenge consists in designing a processing chain composed of simple algorithms that are robust to the high fluctuations of the videos and that satisfy the constraints on power consumption (10 W) and real-time processing (25 frames per second).
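The abstract does not detail the processing chain; purely as an illustration of the kind of simple, low-cost per-frame step such constraints suggest (not FMDT's actual algorithms, and with invented thresholds), here is a frame-differencing detector with connected-component filtering:

```python
import numpy as np
from scipy import ndimage

def detect_bright_tracks(prev_frame, frame, diff_thresh=25, min_area=5):
    """Flag transient bright regions between two consecutive grayscale frames.

    A deliberately simple stand-in for a meteor-detection step: threshold the
    inter-frame difference, then keep connected components above a size floor.
    """
    diff = frame.astype(np.int16) - prev_frame.astype(np.int16)
    mask = diff > diff_thresh                          # brightness increases only
    labels, _ = ndimage.label(mask)                    # connected components
    detections = []
    for i, region in enumerate(ndimage.find_objects(labels), start=1):
        if region is not None:
            area = (labels[region] == i).sum()
            if area >= min_area:
                detections.append(region)              # bounding slices of candidates
    return detections
```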
Permutation tests are widely recognized as robust alternatives to tests based on normal theory, and random permutation tests have frequently been employed to assess the significance of variables in linear models. Despite their widespread use, existing random permutation tests lack finite-sample, assumption-free guarantees for controlling type I error in partial correlation tests. To address this long-standing challenge, we develop a conformal test through permutation-augmented regressions, which we refer to as PALMRT. PALMRT not only achieves power competitive with that of conventional methods but also provides reliable control of the type I error at no more than $2\alpha$ for any targeted level $\alpha$, for arbitrary fixed designs and error distributions; we confirm this through extensive simulations. Compared to the cyclic permutation test (CPT), which also offers theoretical guarantees, PALMRT does not significantly compromise power or impose stringent requirements on the sample size, making it suitable for diverse biomedical applications. We further illustrate the differences between the two in a long-COVID study, where PALMRT validates key findings previously identified using the $t$-test while CPT suffers a drastic loss of power. We endorse PALMRT as a robust and practical hypothesis test in scientific research for its superior error control, power preservation, and simplicity.
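For readers new to the area, here is a generic residual-permutation sketch for a single regression coefficient (in the spirit of textbook Kennedy/Freedman-Lane schemes). It is emphatically *not* the PALMRT construction and carries no finite-sample guarantee; the helper name and defaults are ours:

```python
import numpy as np

def permutation_test_coefficient(X, Z, y, n_perm=999, seed=0):
    """Random permutation p-value for the coefficient of X given covariates Z.

    Residualize y on the nuisance covariates, permute the residuals, and
    refit the full model to build the null distribution of the statistic.
    """
    rng = np.random.default_rng(seed)
    Z1 = np.column_stack([np.ones(len(y)), Z])
    beta_z, *_ = np.linalg.lstsq(Z1, y, rcond=None)
    resid = y - Z1 @ beta_z                         # reduced-model residuals

    def stat(r):
        b, *_ = np.linalg.lstsq(np.column_stack([Z1, X]), r, rcond=None)
        return abs(b[-1])                           # |coefficient of X|

    t_obs = stat(resid)
    t_perm = np.array([stat(rng.permutation(resid)) for _ in range(n_perm)])
    return (1 + np.sum(t_perm >= t_obs)) / (1 + n_perm)
```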
Noise is usually regarded as an obstacle to extracting the effective dynamics from time series, so conventional data-driven approaches typically aim to learn the dynamics while mitigating the effect of noise. However, noise can play a functional role, driving transitions between stable states that underlie many natural and engineered stochastic dynamics. To capture such stochastic transitions from data, we find that reservoir computing, a type of recurrent neural network, can learn noise-induced transitions. We develop a concise training protocol for tuning hyperparameters, with a focus on a pivotal hyperparameter controlling the time scale of the reservoir dynamics. The trained model generates accurate statistics of the transition time and the number of transitions. The approach applies to a wide class of systems, including a bistable system under a double-well potential with either white noise or colored noise. It also captures the asymmetry of the double-well potential, the rotational dynamics caused by the absence of detailed balance, and transitions in multi-stable systems. For experimental protein-folding data, it learns the transition time between folded states, opening the possibility of predicting transition statistics from a small dataset. The results demonstrate the capability of machine-learning methods to capture noise-induced phenomena.
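As a minimal echo-state-network sketch of the setup (not the authors' protocol; all hyperparameters here are invented for illustration): simulate a double-well SDE, drive a leaky reservoir with it, and fit a one-step-ahead ridge readout. The leak rate plays the role of the time-scale hyperparameter the abstract highlights.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data: overdamped particle in V(x) = x^4/4 - x^2/2, i.e. dx = (x - x^3) dt + sigma dW,
# simulated with Euler-Maruyama; sigma is large enough to induce well-to-well hopping.
dt, sigma, T = 0.01, 0.35, 20_000
x = np.empty(T); x[0] = -1.0
for t in range(T - 1):
    x[t + 1] = x[t] + (x[t] - x[t]**3) * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Echo state network: leaky reservoir with a ridge-regression readout.
N, leak, rho_spec, ridge = 300, 0.1, 0.9, 1e-6     # leak sets the reservoir time scale
W_in = 0.5 * rng.uniform(-1, 1, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= rho_spec / np.max(np.abs(np.linalg.eigvals(W)))  # fix the spectral radius

R = np.zeros((T - 1, N)); r = np.zeros(N)
for t in range(T - 1):
    r = (1 - leak) * r + leak * np.tanh(W @ r + W_in * x[t])
    R[t] = r

# One-step-ahead readout; the trained model can then be iterated autonomously
# to generate surrogate trajectories and transition statistics.
W_out = np.linalg.solve(R.T @ R + ridge * np.eye(N), R.T @ x[1:])
print("train RMSE:", np.sqrt(np.mean((R @ W_out - x[1:])**2)))
```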
Reinforcement learning of real-world tasks is highly data-inefficient, and extensive simulation-based modelling has become the dominant approach for training systems. However, in human-robot interaction and many other real-world settings, there is no appropriate one-model-for-all, due to differences between individual instances of the system (e.g., different people) or necessary oversimplifications in the simulation models. This leaves two options: 1. learning the individual system's dynamics approximately from data, which requires data-intensive training, or 2. using a complete digital twin of each instance, which may not be realisable in many cases. We introduce co-kriging adjustment (CKA) and ridge regression adjustment (RRA) as novel ways to combine the advantages of both options. Our adjustment methods are based on an autoregressive AR1 co-kriging model that we integrate with GP priors. This yields a data- and simulation-efficient way of using simplistic simulation models (e.g., a simple two-link model) and rapidly adapting them to individual instances (e.g., the biomechanics of individual people). Using CKA and RRA, we obtain more accurate uncertainty quantification of the entire system's dynamics than pure GP-based and AR1 methods. We demonstrate the efficiency of co-kriging adjustment with an interpretable reinforcement learning control example, learning to control a biomechanical human arm using only a two-link arm simulation model (offline part) and CKA derived from a small amount of interaction data (on-the-fly online). Our method unlocks an efficient and uncertainty-aware way to implement reinforcement learning methods in complex real-world systems for which only imperfect simulation models exist.
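The abstract does not spell the model out; for orientation, the standard AR1 co-kriging structure it builds on (in the Kennedy and O'Hagan sense) links the cheap simulator $f_{\mathrm{lo}}$ (e.g., the two-link model) to the individual system $f_{\mathrm{hi}}$ via a scaling $\rho$ and an independent GP discrepancy $\delta$ learned from the small amount of interaction data:
$$ f_{\mathrm{hi}}(x) = \rho\, f_{\mathrm{lo}}(x) + \delta(x), \qquad f_{\mathrm{lo}} \sim \mathcal{GP}\big(0, k_{\mathrm{lo}}\big), \quad \delta \sim \mathcal{GP}\big(0, k_{\delta}\big). $$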
Cram\'er's moderate deviations give a quantitative estimate for the relative error of the normal approximation and provide theoretical justification for many estimators used in statistics. In this paper, we establish self-normalized Cram\'{e}r-type moderate deviations for martingales under some mild conditions. The result extends an earlier work of Fan, Grama, Liu and Shao [Bernoulli, 2019]. Moreover, applications of our result to Student's statistic, stationary martingale difference sequences, and branching processes in a random environment are also discussed. In particular, we establish Cram\'{e}r-type moderate deviations for Student's $t$-statistic for branching processes in a random environment.
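Schematically (this is the generic shape of such a result; the precise assumptions and range are those of the paper): for martingale differences $(\xi_i)_{i\le n}$ with $W_n=\sum_{i=1}^{n}\xi_i$ and $V_n^2=\sum_{i=1}^{n}\xi_i^2$, a self-normalized Cram\'{e}r-type moderate deviation asserts that
$$ \frac{\mathbb{P}(W_n/V_n \ge x)}{1-\Phi(x)} = 1 + o(1) $$
uniformly in $0 \le x = o(n^{\alpha})$ for some $\alpha>0$ determined by the moment and structural conditions, where $\Phi$ denotes the standard normal distribution function.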
In this paper, we propose Riemannian Acceleration with Preconditioning (RAP) for symmetric eigenvalue problems, one of the most important geodesically convex optimization problems on Riemannian manifolds, and establish its acceleration. First, preconditioning for symmetric eigenvalue problems is discussed from the Riemannian-manifold viewpoint. To obtain local geodesic convexity, we develop the leading angle, a measure of the quality of the preconditioner for symmetric eigenvalue problems. A new Riemannian acceleration, the Locally Optimal Riemannian Accelerated Gradient (LORAG) method, is proposed to handle problems, such as symmetric eigenvalue problems, that are only locally geodesically convex. With techniques similar to those for RAGD and the analysis of locally convex optimization in Euclidean space, we analyze the convergence of LORAG. Combining the local geodesic convexity of symmetric eigenvalue problems under preconditioning with LORAG, we propose Riemannian Acceleration with Preconditioning (RAP) and prove its acceleration. Additionally, when a Schwarz preconditioner, in particular an overlapping or non-overlapping domain decomposition method, is applied to elliptic eigenvalue problems, we obtain a convergence rate of $1-C\kappa^{-1/2}$, where $C$ is a constant independent of the mesh sizes and the eigenvalue gap, $\kappa=\kappa_{\nu}\lambda_{2}/(\lambda_{2}-\lambda_{1})$, $\kappa_{\nu}$ is the parameter from the stable decomposition, and $\lambda_{1}$ and $\lambda_{2}$ are the two smallest eigenvalues of the elliptic operator. Numerical results demonstrate the power of Riemannian acceleration and preconditioning.
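For background (standard material, not the paper's notation): the smallest eigenpair of a symmetric matrix $A$ solves the spherically constrained problem
$$ \min_{x\in\mathbb{S}^{n-1}} f(x) = x^{\top}Ax, \qquad \operatorname{grad} f(x) = 2\,(I - xx^{\top})Ax, $$
where the Riemannian gradient is the Euclidean gradient projected onto the tangent space of the sphere; this problem is geodesically convex only locally, which is the difficulty LORAG is designed to address.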
We consider the linear lambda-calculus extended with the sup type constructor, which provides an additive conjunction along with a non-deterministic destructor. The sup type constructor was introduced in the context of quantum computing. In this paper, we study this type constructor within a simple categorical model of linear logic, employing the category of semimodules over a commutative semiring. We demonstrate that the non-deterministic destructor finds a suitable model in a "weighted" codiagonal map. This approach offers a valid and insightful alternative for interpreting non-determinism, especially in instances where the conventional Powerset Monad interpretation does not align with the category's structure, as is the case with the category of semimodules. The validity of this alternative relies on the presence of biproducts within the category.
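For orientation (standard biproduct notation; the weighting is one plausible reading of the abstract): in a category with biproducts $\oplus$, the codiagonal is the copairing of identities, and a weighted variant scales the two components by semiring elements $\alpha,\beta$:
$$ \nabla_A = [\mathrm{id}_A, \mathrm{id}_A] : A \oplus A \to A, \qquad \nabla_A^{\alpha,\beta} = [\alpha\cdot\mathrm{id}_A,\ \beta\cdot\mathrm{id}_A] : A \oplus A \to A, $$
so that the non-deterministic destructor is interpreted by a (weighted) sum of the two alternatives rather than by collecting them into a set, as the Powerset Monad would.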
We disprove Tokareva's conjecture that any balanced Boolean function of appropriate degree is a derivative of some bent function. The result is based on new upper bounds on the numbers of bent and plateaued functions.
Judging whether an integer is divisible by a prime such as 2 or 3 may appear trivial to human beings, but it can be less straightforward for computers. Here, we tested multiple deep learning architectures and feature-engineering approaches on classifying integers based on their residues when divided by small primes. We found that classification ability depends critically on the feature space. We also evaluated Automated Machine Learning (AutoML) platforms from Amazon, Google, and Microsoft, and found that they failed on this task without appropriately engineered features. Furthermore, we introduced a method that uses linear regression on Fourier-series basis vectors and demonstrated its effectiveness. Finally, we evaluated Large Language Models (LLMs) such as GPT-4, GPT-J, LLaMA, and Falcon, and demonstrated their failures. In conclusion, feature engineering remains an important task for improving performance and increasing the interpretability of machine-learning models, even in the era of AutoML and LLMs.
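The abstract does not give the construction in detail; the following is a minimal sketch of the idea it names (helper names are ours). Since any function of $n \bmod p$ is an exact linear combination of the Fourier basis $\{\cos(2\pi k n/p), \sin(2\pi k n/p)\}$ on $\mathbb{Z}_p$, ordinary least squares on these features can classify residues perfectly:

```python
import numpy as np

def fourier_features(n, p):
    """Fourier basis on Z_p evaluated at integers n: any function of n mod p
    is an exact linear combination of these columns."""
    k = np.arange(1, p)                            # frequencies 1..p-1, plus a bias term
    ang = 2 * np.pi * np.outer(n, k) / p
    return np.column_stack([np.ones(len(n)), np.cos(ang), np.sin(ang)])

p = 3
rng = np.random.default_rng(0)
n_train = rng.integers(0, 10**6, size=2000)
n_test = rng.integers(0, 10**6, size=500)

# One-hot residue targets, fit by ordinary least squares.
Y = np.eye(p)[n_train % p]
W, *_ = np.linalg.lstsq(fourier_features(n_train, p), Y, rcond=None)

pred = fourier_features(n_test, p) @ W
print("test accuracy:", np.mean(pred.argmax(axis=1) == n_test % p))
```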
In recent years, object detection has made impressive progress. Despite these improvements, there is still a significant performance gap between the detection of small and large objects. We analyze a current state-of-the-art model, Mask R-CNN, on a challenging dataset, MS COCO. We show that the overlap between small ground-truth objects and the predicted anchors is much lower than the expected IoU threshold. We conjecture that this is due to two factors: (1) only a few images contain small objects, and (2) small objects do not appear often enough even within the images that contain them. We thus propose to oversample images with small objects and to augment each of them by copy-pasting small objects many times. This allows us to trade off the quality of the detector on large objects against that on small objects. We evaluate different pasting augmentation strategies, and ultimately we achieve a 9.7\% relative improvement on the instance segmentation and 7.1\% on the object detection of small objects, compared to the current state-of-the-art method on MS COCO.
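As a minimal, hypothetical sketch of the core copy-paste operation (function and parameter names are ours; a real pipeline would also update bounding boxes, check overlaps, and blend seams):

```python
import numpy as np

def copy_paste_small_objects(image, mask, n_copies=3, rng=None):
    """Paste every instance in `mask` at random new locations `n_copies` times.

    `image`: HxWx3 uint8 array; `mask`: HxW int array, one positive id per
    instance, 0 = background. A minimal sketch only: no overlap checks,
    blending, or annotation bookkeeping.
    """
    rng = rng or np.random.default_rng()
    out_img, out_mask = image.copy(), mask.copy()
    H, W = mask.shape
    next_id = int(mask.max()) + 1
    for inst_id in np.unique(mask)[1:]:            # skip background (0)
        ys, xs = np.nonzero(mask == inst_id)
        y_min, x_min = ys.min(), xs.min()
        h, w = ys.max() - y_min + 1, xs.max() - x_min + 1
        if h >= H or w >= W:
            continue                               # too large to relocate
        patch = mask[y_min:y_min + h, x_min:x_min + w] == inst_id
        crop = image[y_min:y_min + h, x_min:x_min + w]
        for _ in range(n_copies):
            y0 = rng.integers(0, H - h)
            x0 = rng.integers(0, W - w)
            out_img[y0:y0 + h, x0:x0 + w][patch] = crop[patch]
            out_mask[y0:y0 + h, x0:x0 + w][patch] = next_id
            next_id += 1
    return out_img, out_mask
```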