Monitoring the correctness of distributed cyber-physical systems is essential. Detecting possible safety violations can be hard when some samples are uncertain or missing. Here we monitor a black-box cyber-physical system whose logs are uncertain in both the state and the timestamp dimensions: not only is the logged value known with some uncertainty, but the time at which the log was made is uncertain too. In addition, we make use of an over-approximated yet expressive model, given by a non-linear extension of dynamical systems. Given an offline log, our approach is able to monitor the log against safety specifications with a limited number of false alarms. As a second contribution, we show that our approach can be used online to minimize the number of sample triggers, with the aim of energy efficiency. We apply our approach to three benchmarks: an anesthesia model, an adaptive cruise controller and an aircraft orbiting system.
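A minimal sketch of the uncertainty model only (not the authors' model-based monitoring algorithm; the class names and the safe range are illustrative assumptions): each log entry carries an interval for the timestamp and an interval for the value, and a sample is flagged as a possible violation as soon as its value interval can leave the safe set.

    # Hypothetical sketch: interval-uncertain log samples checked against a safe range.
    # Illustrates only the uncertainty model, not the paper's monitoring approach.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        t_lo: float  # earliest possible timestamp
        t_hi: float  # latest possible timestamp
        v_lo: float  # lower bound on the logged value
        v_hi: float  # upper bound on the logged value

    def possibly_unsafe(s: Sample, safe_lo: float, safe_hi: float) -> bool:
        # A violation is *possible* as soon as the value interval leaves [safe_lo, safe_hi].
        return s.v_lo < safe_lo or s.v_hi > safe_hi

    log = [Sample(0.9, 1.1, 4.8, 5.2), Sample(1.9, 2.2, 5.9, 6.4)]
    alarms = [s for s in log if possibly_unsafe(s, safe_lo=0.0, safe_hi=6.0)]
    print(len(alarms))  # -> 1: only the second sample can exceed the safe bound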
There has recently been an explosion of interest in how "higher-order" structures emerge in complex systems. This "emergent" organization has been found in a variety of natural and artificial systems, although at present the field lacks a unified understanding of what the consequences of higher-order synergies and redundancies are for systems. Typical studies treat the presence (or absence) of synergistic information as a dependent variable and report changes in the level of synergy in response to some change in the system. Here, we attempt to flip the script: rather than treating higher-order information as a dependent variable, we use evolutionary optimization to evolve Boolean networks with significant higher-order redundancies, synergies, or statistical complexity. We then analyse these evolved populations of networks using established tools for characterizing discrete dynamics: the number of attractors, the average transient length, and the Derrida coefficient. We also assess the capacity of the systems to integrate information. We find that high-synergy systems are unstable and chaotic, but with a high capacity to integrate information. In contrast, evolved redundant systems are extremely stable, but have negligible capacity to integrate information. Finally, the complex systems that balance integration and segregation (known as Tononi-Sporns-Edelman complexity) show features of both chaoticity and stability, with a greater capacity to integrate information than the redundant systems while being more stable than the random and synergistic systems. We conclude that there may be a fundamental trade-off between the robustness of a system's dynamics and its capacity to integrate information (which inherently requires flexibility and sensitivity), and that certain kinds of complexity naturally balance this trade-off.
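A hedged sketch of one of the dynamical measures mentioned above, the one-step Derrida coefficient of a random Boolean network (the network structure, update rules, and sampling choices here are illustrative assumptions): flip a single node, advance the dynamics one step, and measure how much the perturbation spreads; values above 1 indicate chaotic, sensitive dynamics, values below 1 indicate ordered, stable dynamics.

    # Hedged sketch: a one-step Derrida coefficient for a random Boolean network.
    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 12, 2                                   # nodes and in-degree (assumed)
    inputs = np.array([rng.choice(N, K, replace=False) for _ in range(N)])
    tables = rng.integers(0, 2, size=(N, 2 ** K))  # random lookup-table update rules

    def step(state):
        idx = (state[inputs] * (2 ** np.arange(K))).sum(axis=1)
        return tables[np.arange(N), idx]

    def derrida_coefficient(samples=2000):
        dist = 0.0
        for _ in range(samples):
            s = rng.integers(0, 2, size=N)
            t = s.copy()
            t[rng.integers(N)] ^= 1               # flip a single node
            dist += np.mean(step(s) != step(t))   # normalized Hamming distance after one step
        return N * dist / samples                 # >1 suggests chaos, <1 suggests order

    print(derrida_coefficient())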
The notion of an e-value has recently been proposed as a possible alternative to critical regions and p-values in statistical hypothesis testing. In this paper we consider testing the nonparametric hypothesis of symmetry, introduce e-value analogues of three popular nonparametric tests, define an e-value analogue of Pitman's asymptotic relative efficiency, and apply it to the three nonparametric tests. We discuss limitations of our simple definition of asymptotic relative efficiency and list directions of further research.
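For background (standard definitions rather than anything specific to this paper): an e-variable for a null hypothesis $H$ is a nonnegative random variable whose expected value is at most one under every distribution in the null, and Pitman's relative efficiency compares the sample sizes two tests need to attain the same power against the same sequence of local alternatives:

$$ E \ge 0, \qquad \mathbb{E}_P[E] \le 1 \ \text{ for all } P \in H, \qquad \mathrm{ARE}_{1,2} = \lim \frac{n_2}{n_1}, $$

where $n_1$ and $n_2$ are the sample sizes at which tests 1 and 2 reach the same power.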
This paper addresses dimensionality reduction in a regression setting where the predictors are a mixture of a functional variable and a high-dimensional vector. A flexible model, combining sparse linear ideas with semiparametric ones, is proposed. A wide scope of asymptotic results is provided: this covers both the rates of convergence of the estimators and the asymptotic behaviour of the variable selection procedure. Practical issues are analysed through finite-sample simulated experiments, while an application to Tecator's data illustrates the usefulness of our methodology.
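One plausible instance of such a model, stated here purely for illustration (the paper's exact specification may differ), is a sparse semi-functional partial linear form:

$$ Y = \sum_{j=1}^{p} \beta_j X_j + m(\mathcal{X}) + \varepsilon, $$

where $\mathcal{X}$ is the functional predictor, $m$ is estimated semiparametrically, and only a few of the coefficients $\beta_1,\dots,\beta_p$ of the high-dimensional vector are nonzero, so that variable selection and nonparametric smoothing are carried out jointly.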
Since the start of the operational use of ensemble prediction systems, ensemble-based probabilistic forecasting has become the most advanced approach in weather prediction. However, despite persistent development over the last three decades, ensemble forecasts still often suffer from a lack of calibration and might exhibit systematic bias, which calls for some form of statistical post-processing. Nowadays, one can choose from a large variety of post-processing approaches, where parametric methods provide full predictive distributions of the investigated weather quantity. Parameter estimation in these models is based on training data consisting of past forecast-observation pairs, thus post-processed forecasts are usually available only at those locations where training data are accessible. We propose a general clustering-based interpolation technique for extending calibrated predictive distributions from observation stations to any location in the ensemble domain where ensemble forecasts are at hand. Focusing on the ensemble model output statistics (EMOS) post-processing technique, in a case study based on wind speed ensemble forecasts of the European Centre for Medium-Range Weather Forecasts, we demonstrate the predictive performance of various versions of the suggested method and show its superiority over the regionally estimated and interpolated EMOS models as well as over the raw ensemble forecasts.
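As background, a common EMOS specification for wind speed (a typical choice in the post-processing literature; the exact formulation used in the case study may differ) is a normal distribution truncated from below at zero, with location and scale linked to the ensemble:

$$ Y \mid f_1,\dots,f_K \sim \mathcal{N}^{0}\!\big(a + b_1 f_1 + \dots + b_K f_K,\; c + d\,S^2\big), $$

where $f_1,\dots,f_K$ are the ensemble members, $S^2$ is the ensemble variance, and the coefficients $a, b_1,\dots,b_K, c, d$ are estimated from the training data, for instance by minimizing the mean continuous ranked probability score (CRPS).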
Asymptotic analysis for related inference problems often involves similar steps and proofs. These intermediate results could be shared across problems if each of them is made self-contained and easily identified. However, asymptotic analysis using Taylor expansions is of limited use for result borrowing because it is a step-by-step procedural approach. This article introduces EEsy, a modular system for estimating finite- and infinite-dimensional parameters in related inference problems. It is based on the infinite-dimensional Z-estimation theorem, Donsker and Glivenko-Cantelli preservation theorems, and weight calibration techniques. This article identifies the systematic nature of these tools and consolidates them into one system containing several modules, which can be built, shared, and extended in a modular manner. This change to the structure of method development allows related methods to be developed in parallel and complex problems to be solved collaboratively, expediting the development of new analytical methods. This article considers four related inference problems -- estimating parameters with random sampling, two-phase sampling, auxiliary information incorporation, and model misspecification. We illustrate this modular approach by systematically developing 9 parameter estimators and 18 variance estimators for the four related inference problems regarding semi-parametric additive hazards models. Simulation studies show that the obtained asymptotic results for these 27 estimators are valid. Finally, we describe how this system can simplify the use of empirical process theory, a powerful but challenging tool for the broad community of methods developers to adopt. We discuss challenges and the extension of this system to other inference problems.
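For reference (standard formulations given as background, not the article's exact notation): a Z-estimator solves an estimating equation built from the data, and the semi-parametric additive hazards model specifies the conditional hazard additively:

$$ \Psi_n(\hat\theta_n) \approx 0, \quad \Psi_n(\theta) = \frac{1}{n}\sum_{i=1}^n \psi_\theta(O_i), \qquad \lambda(t \mid Z) = \lambda_0(t) + \beta^\top Z, $$

where $O_1,\dots,O_n$ are the observations, $\lambda_0$ is an unspecified baseline hazard, and $\beta$ is the finite-dimensional parameter of interest; the infinite-dimensional Z-estimation theorem supplies consistency and weak convergence of such estimators under Donsker-type conditions.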
Time to an event of interest over a lifetime is a central measure of the clinical benefit of an intervention used in a health technology assessment (HTA). Within the same trial, multiple end-points may also be considered, for example overall and progression-free survival time for different drugs in oncology studies. A common challenge arises when an intervention is only effective for some proportion of the population who are not clinically identifiable. Therefore, latent group membership, as well as separate survival models for the identified groups, needs to be estimated. However, follow-up in trials may be relatively short, leading to substantial censoring. We present a general Bayesian hierarchical framework that can handle this complexity by exploiting the similarity of cure fractions between end-points, accounting for the correlation between them and improving the extrapolation beyond the observed data. Assuming exchangeability between cure fractions facilitates the borrowing of information between end-points. We show the benefits of using our approach with a motivating example, the CheckMate 067 phase 3 trial, consisting of patients with metastatic melanoma treated with first-line therapy.
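A standard mixture cure formulation with exchangeable cure fractions across end-points (a generic sketch of the idea, not necessarily the paper's exact parameterisation) is

$$ S_e(t) = \pi_e + (1 - \pi_e)\, S_{u,e}(t), \qquad \mathrm{logit}(\pi_e) \sim \mathcal{N}(\mu, \sigma^2), \quad e = 1,\dots,E, $$

where $\pi_e$ is the cure fraction for end-point $e$, $S_{u,e}$ is the survival function of the uncured group (extrapolated with a parametric model), and the shared hyperparameters $(\mu, \sigma^2)$ allow information to be borrowed between end-points.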
Object recognition and object pose estimation in robotic grasping continue to be significant challenges, since building a labelled dataset can be time-consuming and financially costly in terms of data collection and annotation. In this work, we propose a synthetic data generation method that minimizes human intervention and makes downstream image segmentation algorithms more robust by combining a generated synthetic dataset with a smaller real-world dataset (hybrid dataset). Annotation experiments show that the proposed synthetic scene generation can diminish labelling time dramatically. RGB image segmentation is trained on the hybrid dataset and combined with depth information to produce pixel-to-point correspondences for individual segmented objects. The object to grasp is then determined by the confidence score of the segmentation algorithm. Pick-and-place experiments demonstrate that segmentation trained on our hybrid dataset (98.9%, 70%) outperforms the real dataset and a publicly available dataset by (6.7%, 18.8%) and (2.8%, 10%) in terms of labelling and grasping success rate, respectively. Supplementary material is available at //sites.google.com/view/synthetic-dataset-generation.
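A minimal sketch of the pixel-to-point step (the function name, camera intrinsics, and depth scale are illustrative assumptions, not the paper's implementation): each pixel inside a segmentation mask is back-projected with the aligned depth map through the pinhole camera model to obtain 3D points for grasp-pose estimation.

    # Hypothetical sketch: back-project a segmented object's pixels to 3D camera-frame points.
    import numpy as np

    def mask_to_points(mask, depth, fx, fy, cx, cy):
        """mask: HxW bool array from the segmenter; depth: HxW array in metres."""
        v, u = np.nonzero(mask)              # pixel rows (v) and columns (u) of the object
        z = depth[v, u]
        valid = z > 0                        # drop pixels with missing depth
        u, v, z = u[valid], v[valid], z[valid]
        x = (u - cx) * z / fx                # pinhole back-projection
        y = (v - cy) * z / fy
        return np.stack([x, y, z], axis=1)   # Nx3 points for the segmented object

    # Example with a toy mask and constant depth:
    points = mask_to_points(np.ones((4, 4), bool), np.full((4, 4), 0.5),
                            fx=600.0, fy=600.0, cx=2.0, cy=2.0)
    print(points.shape)  # (16, 3)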
Remotely sensed data are dominated by mixed Land Use and Land Cover (LULC) types. Spectral unmixing (SU) is a key technique that disentangles mixed pixels into constituent LULC types and their abundance fractions. While existing studies on Deep Learning (DL) for SU typically focus on single time-step hyperspectral (HS) or multispectral (MS) data, our work pioneers SU using MODIS MS time series, addressing missing data with end-to-end DL models. Our approach enhances a Long Short-Term Memory (LSTM)-based model by incorporating geographic, topographic (geo-topographic), and climatic ancillary information. Notably, our method eliminates the need for explicit endmember extraction, instead learning the input-output relationship between mixed spectra and LULC abundances through supervised learning. Experimental results demonstrate that integrating spectral-temporal input data with geo-topographic and climatic information significantly improves the estimation of LULC abundances in mixed pixels. To facilitate this study, we curated a novel labeled dataset for Andalusia (Spain) with monthly MODIS multispectral time series at 460 m resolution for 2013. Named Andalusia MultiSpectral MultiTemporal Unmixing (Andalusia-MSMTU), this dataset provides pixel-level annotations of LULC abundances along with ancillary information. The dataset (//zenodo.org/records/7752348) and code (//github.com/jrodriguezortega/MSMTU) are available to the public.
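A hedged sketch of an LSTM-based unmixing model with ancillary inputs (layer sizes, feature dimensions, and the fusion strategy are assumptions, not the authors' exact architecture; see the linked repository for the real model): the monthly spectra are encoded by an LSTM, the final hidden state is concatenated with geo-topographic and climatic features, and a softmax head outputs abundance fractions that sum to one.

    import torch
    import torch.nn as nn

    class UnmixingLSTM(nn.Module):
        def __init__(self, n_bands=7, n_aux=5, n_classes=4, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(n_bands, hidden, batch_first=True)  # monthly spectra sequence
            self.head = nn.Sequential(nn.Linear(hidden + n_aux, 64), nn.ReLU(),
                                      nn.Linear(64, n_classes))

        def forward(self, spectra, aux):
            # spectra: (batch, 12 months, n_bands); aux: (batch, n_aux) ancillary features
            _, (h, _) = self.lstm(spectra)
            z = torch.cat([h[-1], aux], dim=1)
            return torch.softmax(self.head(z), dim=1)   # abundance fractions summing to 1

    model = UnmixingLSTM()
    abund = model(torch.randn(8, 12, 7), torch.randn(8, 5))
    print(abund.shape, abund.sum(dim=1))  # (8, 4), each row sums to ~1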
We consider the posets of equivalence relations on finite sets under the standard embedding ordering and under the consecutive embedding ordering. In the latter case, the relations are also assumed to have an underlying linear order, which governs consecutive embeddings. For each poset we ask the well quasi-order and atomicity decidability questions: Given finitely many equivalence relations $\rho_1,\dots,\rho_k$, is the downward closed set Av$(\rho_1,\dots,\rho_k)$ consisting of all equivalence relations which do not contain any of $\rho_1,\dots,\rho_k$: (a) well-quasi-ordered, meaning that it contains no infinite antichains? and (b) atomic, meaning that it is not a union of two proper downward closed subsets, or, equivalently, that it satisfies the joint embedding property?
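For reference, the joint embedding property mentioned above can be stated concisely: a downward closed class $\mathcal{C}$ is atomic precisely when

$$ \forall\, \sigma, \tau \in \mathcal{C} \;\; \exists\, \pi \in \mathcal{C} \ \text{ such that } \ \sigma \hookrightarrow \pi \ \text{ and } \ \tau \hookrightarrow \pi, $$

i.e. any two members of the class embed jointly into a third member.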
Graph representation learning for hypergraphs can be used to extract patterns among higher-order interactions that are critically important in many real-world problems. Current approaches designed for hypergraphs, however, are unable to handle different types of hypergraphs and are typically not generic for various learning tasks. Indeed, models that can predict variable-sized heterogeneous hyperedges have not been available. Here we develop a new self-attention-based graph neural network called Hyper-SAGNN applicable to homogeneous and heterogeneous hypergraphs with variable hyperedge sizes. We perform extensive evaluations on multiple datasets, including four benchmark network datasets and two single-cell Hi-C datasets in genomics. We demonstrate that Hyper-SAGNN significantly outperforms the state-of-the-art methods on traditional tasks while also achieving strong performance on a new task called outsider identification. Hyper-SAGNN will be useful for graph representation learning to uncover complex higher-order interactions in different applications.
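A simplified sketch in the spirit of a self-attention hyperedge scorer (layer sizes and details are assumptions and this is not the exact Hyper-SAGNN architecture): each node in a candidate hyperedge gets a static embedding from a position-wise layer and a dynamic embedding from self-attention over its co-members, and the gap between the two is turned into a per-node score that is averaged into a hyperedge probability.

    import torch
    import torch.nn as nn

    class HyperedgeScorer(nn.Module):
        def __init__(self, in_dim=32, dim=64, heads=4):
            super().__init__()
            self.static = nn.Sequential(nn.Linear(in_dim, dim), nn.Tanh())   # per-node static embedding
            self.proj = nn.Linear(in_dim, dim)
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # dynamic, tuple-dependent
            self.out = nn.Linear(dim, 1)

        def forward(self, x):
            # x: (batch, k, in_dim) node features of a candidate hyperedge of size k
            s = self.static(x)
            h = self.proj(x)
            d, _ = self.attn(h, h, h)                  # each node attends to its co-members
            p = torch.sigmoid(self.out((d - s) ** 2))  # per-node score from the static/dynamic gap
            return p.mean(dim=1).squeeze(-1)           # hyperedge probability

    scorer = HyperedgeScorer()
    print(scorer(torch.randn(5, 3, 32)).shape)  # scores for 5 candidate 3-node hyperedges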