Non-stationary signals are ubiquitous in real life. Many techniques have been proposed over the last decades to decompose multi-component signals into simple oscillatory mono-components, like the groundbreaking Empirical Mode Decomposition technique and the Iterative Filtering method. When a signal contains mono-components with rapidly varying instantaneous frequencies, such as chirps or whistles, it becomes particularly hard for most techniques to properly factor out these components. The Adaptive Local Iterative Filtering technique has recently gained interest in many applied fields of research for its ability to deal with non-stationary signals presenting amplitude and frequency modulation. In this work, we address the open question of how to guarantee a priori convergence of this technique, and we propose two new algorithms. The first method, called Stable Adaptive Local Iterative Filtering, is a stabilized version of Adaptive Local Iterative Filtering that we prove to be always convergent. The stability, however, comes at the cost of higher computational complexity. The second technique, called Resampled Iterative Filtering, is a new generalization of the Iterative Filtering method. We prove that Resampled Iterative Filtering is guaranteed to converge a priori for any kind of signal. Furthermore, in the discrete setting, we show that its calculations can be drastically accelerated by leveraging the mathematical properties of the matrices involved. Finally, we present artificial and real-life examples to show the power and performance of the proposed methods.
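To make the basic mechanism concrete, the following Python sketch shows the inner loop of plain Iterative Filtering (not the ALIF or Resampled variants proposed here): one intrinsic mode function is extracted by repeatedly subtracting a local moving average. The fixed filter length, the triangular kernel, and the fixed iteration count are illustrative assumptions; the actual method chooses the filter adaptively and uses a convergence criterion.

\begin{verbatim}
import numpy as np

def iterative_filtering_imf(signal, half_len, n_iter=50):
    # One admissible low-pass kernel: a uniform window convolved
    # with itself (a triangular filter), normalized to sum to 1.
    w = np.convolve(np.ones(half_len), np.ones(half_len))
    w /= w.sum()
    s = signal.astype(float).copy()
    for _ in range(n_iter):
        moving_avg = np.convolve(s, w, mode="same")
        s = s - moving_avg  # keep the fast oscillation, drop the trend
    return s

t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 3 * t)
imf1 = iterative_filtering_imf(x, half_len=25)  # fast mono-component
residual = x - imf1                             # slow mono-component
\end{verbatim}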
We prove a priori and a posteriori error estimates for physics-informed neural networks (PINNs) for linear PDEs. We analyze elliptic equations in primal and mixed form, elasticity, parabolic, hyperbolic, and Stokes equations, as well as a PDE-constrained optimization problem. For the analysis, we propose an abstract framework in the common language of bilinear forms, and we show that coercivity and continuity lead to error estimates. The obtained estimates are sharp and reveal that the $L^2$ penalty approach for initial and boundary conditions in the PINN formulation weakens the norm in which the error decays. Finally, utilizing recent advances in PINN optimization, we present numerical examples that illustrate the ability of the method to achieve accurate solutions.
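As an illustration of the $L^2$ penalty formulation analyzed above, here is a minimal PyTorch sketch of a PINN for the one-dimensional Poisson problem $-u''(x) = f(x)$ on $(0,1)$ with homogeneous Dirichlet conditions. The network size, penalty weight, and right-hand side are illustrative choices, not the paper's setup.

\begin{verbatim}
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)  # exact u = sin(pi x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x_bc = torch.tensor([[0.0], [1.0]])  # boundary points

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)  # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde_residual = (-d2u - f(x)).pow(2).mean()  # L^2 norm of the PDE residual
    bc_penalty = net(x_bc).pow(2).mean()        # L^2 penalty on u(0), u(1)
    loss = pde_residual + 100.0 * bc_penalty    # penalty weight is a choice
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}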
Changes in the timescales at which complex systems evolve are essential to predicting critical transitions and catastrophic failures. Disentangling the timescales of the dynamics governing complex systems remains a key challenge. In this study, we introduce an integrated Bayesian framework based on temporal network models to address this challenge. We focus on two methodologies: change point detection for identifying shifts in system dynamics, and spectral analysis for inferring the distribution of timescales. Applied to synthetic and empirical datasets, these methodologies robustly identify critical transitions and comprehensively map the dominant and subsidiary timescales of complex systems. This dual approach offers a powerful tool for analyzing temporal networks, significantly enhancing our understanding of dynamic behaviors in complex systems.
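A toy version of the change point component can be sketched as follows: per-snapshot edge counts of a temporal network are modeled as Poisson with a single rate shift, and a posterior over the shift location is computed under a uniform prior. The plug-in rate estimates make this a profile-likelihood approximation rather than the paper's full Bayesian framework.

\begin{verbatim}
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)
# Synthetic edge counts: the activity rate jumps from 5 to 12 at t = 60.
counts = np.concatenate([rng.poisson(5, 60), rng.poisson(12, 40)])
T = len(counts)

log_post = np.full(T, -np.inf)
for tau in range(1, T - 1):  # candidate change points
    lam1, lam2 = counts[:tau].mean(), counts[tau:].mean()  # plug-in rates
    log_post[tau] = (poisson.logpmf(counts[:tau], lam1).sum()
                     + poisson.logpmf(counts[tau:], lam2).sum())
post = np.exp(log_post - log_post.max())
post /= post.sum()                         # posterior over tau (uniform prior)
print("MAP change point:", post.argmax())  # recovers a value near 60
\end{verbatim}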
Batch Normalization's (BN) unique property of depending on other samples in a batch is known to cause problems in several tasks, including sequence modeling. Yet, BN-related issues are hardly studied for long video understanding, despite the ubiquitous use of BN in CNNs (Convolutional Neural Networks) for feature extraction. Especially in surgical workflow analysis, where the lack of pretrained feature extractors has led to complex, multi-stage training pipelines, limited awareness of BN issues may have hidden the benefits of training CNNs and temporal models end to end. In this paper, we analyze pitfalls of BN in video learning, including issues specific to online tasks such as a 'cheating' effect in anticipation. We observe that BN's properties create major obstacles for end-to-end learning. However, using BN-free backbones, even simple CNN-LSTMs beat the state of the art on three surgical workflow benchmarks by utilizing adequate end-to-end training strategies which maximize temporal context. We conclude that awareness of BN's pitfalls is crucial for effective end-to-end learning in surgical tasks. By reproducing results on natural-video datasets, we hope our insights will benefit other areas of video learning as well. Code is available at: \url{https://gitlab.com/nct_tso_public/pitfalls_bn}
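A minimal sketch of the end-to-end setup discussed above, assuming a BN-free backbone: swapping BN for GroupNorm in a torchvision ResNet-18 is one illustrative way to remove batch dependence, and the sizes and the 7-class phase head are placeholders, not the paper's exact configuration.

\begin{verbatim}
import torch
import torch.nn as nn
import torchvision

# BN-free backbone: GroupNorm keeps no batch statistics, so frames of the
# same video in one batch cannot leak information into each other.
backbone = torchvision.models.resnet18(
    weights=None, norm_layer=lambda ch: nn.GroupNorm(8, ch))
backbone.fc = nn.Identity()                 # 512-d per-frame features
lstm = nn.LSTM(input_size=512, hidden_size=256, batch_first=True)
head = nn.Linear(256, 7)                    # e.g., 7 surgical phases

def forward(video):                         # video: (B, T, 3, H, W)
    b, t = video.shape[:2]
    feats = backbone(video.flatten(0, 1)).view(b, t, -1)
    hidden, _ = lstm(feats)                 # full temporal context
    return head(hidden)                     # per-frame phase logits

logits = forward(torch.randn(2, 8, 3, 224, 224))  # shape (2, 8, 7)
\end{verbatim}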
Recommendation algorithms play a pivotal role in shaping our media choices, which makes it crucial to comprehend their long-term impact on user behavior. These algorithms are often linked to two critical outcomes: homogenization, wherein users consume similar content despite disparate underlying preferences, and the filter bubble effect, wherein individuals with differing preferences only consume content aligned with their preferences (without much overlap with other users). Prior research assumes a trade-off between homogenization and filter bubble effects and then shows that personalized recommendations mitigate filter bubbles by fostering homogenization. However, because of this assumed trade-off between the two effects, prior work cannot develop a more nuanced view of how recommendation systems may independently impact homogenization and filter bubble effects. We develop a more refined definition of homogenization and the filter bubble effect by decomposing them into two key metrics: how different the average consumption is between users (inter-user diversity) and how varied an individual's consumption is (intra-user diversity). We then use a novel agent-based simulation framework that enables a holistic view of the impact of recommendation systems on homogenization and filter bubble effects. Our simulations show that traditional recommendation algorithms (based on past behavior) mainly reduce filter bubbles by affecting inter-user diversity without significantly impacting intra-user diversity. Building on these findings, we introduce two new recommendation algorithms that take a more nuanced approach by accounting for both types of diversity.
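The two metrics can be made concrete as follows, representing each user by a distribution over consumed items. Entropy for intra-user diversity and mean pairwise cosine distance for inter-user diversity are illustrative instantiations; the paper's exact definitions may differ.

\begin{verbatim}
import numpy as np

def intra_user_diversity(p):
    # Entropy of one user's consumption distribution (higher = more varied).
    p = p / p.sum()
    return -(p[p > 0] * np.log(p[p > 0])).sum()

def inter_user_diversity(P):
    # Mean pairwise cosine distance between users' consumption vectors.
    P = P / np.linalg.norm(P, axis=1, keepdims=True)
    sims = P @ P.T
    n = len(P)
    return 1 - (sims.sum() - n) / (n * (n - 1))  # exclude self-similarity

# 100 users, 50 items, random consumption profiles.
P = np.random.default_rng(2).dirichlet(np.ones(50), size=100)
print(inter_user_diversity(P))
print(np.mean([intra_user_diversity(p) for p in P]))
\end{verbatim}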
Optimal decision-making presents a significant challenge for autonomous systems operating in uncertain, stochastic, and time-varying environments. Environmental variability over time can significantly impact the system's optimal decision-making strategy for mission completion. To model such environments, our work combines the previous notion of Time-Varying Markov Decision Processes (TVMDP) with partial observability and introduces Time-Varying Partially Observable Markov Decision Processes (TV-POMDP). We propose a two-pronged approach to accurately estimate and plan within the TV-POMDP: 1) Memory Prioritized State Estimation (MPSE), which leverages weighted memory to provide more accurate time-varying transition estimates; and 2) an MPSE-integrated planning strategy that optimizes long-term rewards while accounting for temporal constraints. We validate the proposed framework and algorithms using simulations and hardware, with robots exploring partially observable, time-varying environments. Our results demonstrate superior performance over standard methods, highlighting the framework's effectiveness in stochastic, uncertain, time-varying domains.
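The key idea behind MPSE-style estimation can be sketched with a recency-weighted transition estimate: recent samples count more when the dynamics drift. The exponential decay scheme below is an illustrative assumption, not the paper's exact weighting.

\begin{verbatim}
import numpy as np

def weighted_transition_estimate(transitions, n_states, decay=0.9):
    # transitions: list of (t, s, s') observed under a fixed action.
    t_now = max(t for t, _, _ in transitions)
    counts = np.zeros((n_states, n_states))
    for t, s, s_next in transitions:
        counts[s, s_next] += decay ** (t_now - t)  # older samples fade
    counts += 1e-8                                 # avoid empty rows
    return counts / counts.sum(axis=1, keepdims=True)

data = [(0, 0, 1), (1, 1, 0), (2, 0, 1), (3, 0, 0), (4, 0, 0)]
print(weighted_transition_estimate(data, n_states=2))
\end{verbatim}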
Cloud-native technologies have been observed to expand into the realm of the Internet of Things (IoT) and cyber-physical systems, of which an important application domain is robotics. In this paper, we review cloudification practice in the robotics domain from both literature and industrial perspectives. We propose RoboKube, an adaptive framework based on the Kubernetes (K8s) ecosystem that sets up a common platform across the device-cloud continuum for the deployment of cloudified Robot Operating System (ROS) powered applications, to facilitate the cloud-native evolution in robotics. We examine the process of modernizing ROS applications using cloud-native technologies, focusing on both the platform and application perspectives. In addition, we address the challenges of networking setups for heterogeneous environments. This paper intends to serve as a guide for developers and researchers, offering insights into containerization strategies, ROS node distribution and clustering, and deployment options. To demonstrate the feasibility of our approach, we present a case study involving the cloudification of a teleoperation testbed.
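For a flavor of the deployment side, here is a minimal sketch using the official Kubernetes Python client to run a containerized ROS 2 node on a cluster. The image, names, and namespace are placeholder assumptions, not RoboKube's actual manifests.

\begin{verbatim}
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig

container = client.V1Container(
    name="talker",
    image="ros:humble",  # stock ROS 2 image; a placeholder choice
    command=["ros2", "run", "demo_nodes_cpp", "talker"],
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="ros-talker"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "ros-talker"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "ros-talker"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment("default", deployment)
\end{verbatim}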
Mittag-Leffler correlated noise (M-L noise) plays a crucial role in the dynamics of complex systems, yet the scientific community has lacked tools for its direct generation. Addressing this gap, our work introduces GenML, a Python library specifically designed for generating M-L noise. We detail the architecture and functionalities of GenML, as well as its underlying algorithmic approach, which enables the precise simulation of M-L noise. The effectiveness of GenML is validated through quantitative analyses of autocorrelation functions and diffusion behaviors, showcasing its capability to accurately replicate theoretical noise properties. GenML thus enables the effective application of M-L noise data in numerical simulation and in data-driven methods for describing complex systems, moving beyond mere theoretical modeling.
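The general idea of generating noise with a prescribed Mittag-Leffler autocorrelation can be sketched via circulant spectral filtering, as below. This is an illustrative stand-in, not GenML's actual algorithm; the truncated series for the Mittag-Leffler function is numerically reliable only for modest lags.

\begin{verbatim}
import numpy as np
from scipy.special import gamma

def mittag_leffler(z, lam, n_terms=160):
    # Truncated series E_lam(z) = sum_k z^k / Gamma(lam*k + 1).
    k = np.arange(n_terms)
    return np.sum(np.power.outer(z, k) / gamma(lam * k + 1), axis=-1)

def ml_noise(n, lam=0.7, tau=10.0, seed=0):
    t = np.arange(n)
    acf = mittag_leffler(-(t / tau) ** lam, lam)  # target autocorrelation
    # Circulant embedding: filter white noise so that its circular
    # autocovariance matches the (symmetrized) target ACF.
    spectrum = np.abs(np.fft.fft(np.concatenate([acf, acf[-2:0:-1]])))
    white = np.fft.fft(
        np.random.default_rng(seed).standard_normal(len(spectrum)))
    return np.real(np.fft.ifft(np.sqrt(spectrum) * white))[:n]

x = ml_noise(256)  # Gaussian samples with M-L correlations
\end{verbatim}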
Text generation with Large Language Models (LLMs) is known to be memory-bound due to the combination of their auto-regressive nature, huge parameter counts, and limited memory bandwidths, often resulting in low token rates. Speculative decoding has been proposed as a solution for LLM inference acceleration. However, since draft models are often unavailable in modern open-source LLM families, e.g., for Llama 2 7B, training a high-quality draft model is required to enable inference acceleration via speculative decoding. In this paper, we propose a simple draft model training framework for direct alignment to chat-capable target models. With the proposed framework, we train Llama 2 Chat Drafter 115M, a draft model for Llama 2 Chat 7B or larger, with only 1.64\% of the original size. Our training framework consists only of pretraining, distillation dataset generation, and finetuning with knowledge distillation, with no additional alignment procedure. For the finetuning step, we use instruction-response pairs generated by the target model for distillation in a plausible data distribution, and we propose a new Total Variation Distance++ (TVD++) loss that incorporates variance-reduction techniques inspired by the policy gradient method in reinforcement learning. Our empirical results show that Llama 2 Chat Drafter 115M with speculative decoding achieves up to 2.3 block efficiency and a 2.4$\times$ speed-up relative to autoregressive decoding on various tasks with no further task-specific fine-tuning.
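For reference, a plain total variation distance distillation loss between draft and target next-token distributions can be written as below. This is the standard TVD only; the variance-reduction terms that make up the paper's TVD++ are not reproduced here.

\begin{verbatim}
import torch
import torch.nn.functional as F

def tvd_distillation_loss(draft_logits, target_logits):
    # Logits have shape (batch, seq, vocab).
    p = F.softmax(target_logits, dim=-1)           # teacher (target model)
    q = F.softmax(draft_logits, dim=-1)            # student (draft model)
    return 0.5 * (p - q).abs().sum(dim=-1).mean()  # mean TVD per position

draft = torch.randn(2, 16, 32000, requires_grad=True)  # draft model output
target = torch.randn(2, 16, 32000)                     # target model output
loss = tvd_distillation_loss(draft, target)
loss.backward()
\end{verbatim}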
The dominating NLP paradigm of training a strong neural predictor to perform one task on a specific dataset has led to state-of-the-art performance in a variety of applications (e.g., sentiment classification, span-prediction-based question answering, or machine translation). However, it builds upon the assumption that the data distribution is stationary, i.e., that the data is sampled from a fixed distribution both at training and test time. This way of training is inconsistent with how we as humans are able to learn from and operate within a constantly changing stream of information. Moreover, it is ill-adapted to real-world use cases where the data distribution is expected to shift over the course of a model's lifetime. The first goal of this thesis is to characterize the different forms this shift can take in the context of natural language processing, and to propose benchmarks and evaluation metrics to measure its effect on current deep learning architectures. We then proceed to take steps to mitigate the effect of distributional shift on NLP models. To this end, we develop methods based on parametric reformulations of the distributionally robust optimization framework. Empirically, we demonstrate that these approaches yield more robust models on a selection of realistic problems. In the third and final part of this thesis, we explore ways of efficiently adapting existing models to new domains or tasks. Our contribution to this topic takes inspiration from information geometry to derive a new gradient update rule that alleviates catastrophic forgetting issues during adaptation.
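One common surrogate for the distributionally robust objective, given here only as a minimal sketch and not as the thesis's specific parametric reformulation, reweights per-example losses with a softmax so that the hardest examples dominate:

\begin{verbatim}
import torch

def dro_loss(per_example_losses, temperature=1.0):
    # Exponential tilting: an adversarial reweighting of the batch,
    # approaching the worst-case (max) loss as temperature -> 0.
    weights = torch.softmax(per_example_losses.detach() / temperature, dim=0)
    return (weights * per_example_losses).sum()

losses = torch.tensor([0.2, 1.5, 0.4, 3.0], requires_grad=True)
print(dro_loss(losses))  # upweights the hardest examples in the batch
\end{verbatim}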
Deep Learning algorithms have achieved state-of-the-art performance in Image Classification and have been used even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that these algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations, generated by malicious optimization algorithms in order to fool classifiers. As an attempt to mitigate these vulnerabilities, numerous countermeasures have been constantly proposed in the literature. Nevertheless, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have already been shown to be ineffective against adaptive attackers. Thus, this self-contained paper aims to provide all readers with a review of the latest research progress on Adversarial Machine Learning in Image Classification, from a defender's perspective. We introduce novel taxonomies for categorizing adversarial attacks and defenses, and we discuss the existence of adversarial examples. Further, in contrast to existing surveys, we also provide relevant guidance that researchers should take into consideration when devising and evaluating defenses. Finally, based on the reviewed literature, we discuss some promising paths for future research.
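As a concrete instance of the attacks such surveys categorize, the classic Fast Gradient Sign Method (FGSM) perturbs an image along the sign of the loss gradient; the linear model below is only a stand-in for a real classifier.

\begin{verbatim}
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=8 / 255):
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()    # step that increases the loss
    return x_adv.clamp(0, 1).detach()  # keep pixel values valid

model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
x_adv = fgsm(model, torch.rand(1, 3, 32, 32), torch.tensor([3]))
\end{verbatim}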