IoT sensors, especially video cameras, are ubiquitously deployed around the world to perform a variety of computer vision tasks in several verticals including retail, healthcare, safety and security, transportation, and manufacturing. To amortize their high deployment effort and cost, it is desirable to perform multiple video analytics tasks, which we refer to as Analytical Units (AUs), on the video feed coming out of every camera. In this paper, we first show that in a multi-AU setting, changing the camera setting has a disproportionate impact on the performance of different AUs. In particular, the optimal setting for one AU may severely degrade the performance of another AU, and the impact on different AUs varies as environmental conditions change. We then present Elixir, a system that enhances video stream quality for multiple analytics tasks running on the same stream. Elixir leverages Multi-Objective Reinforcement Learning (MORL), where the RL agent caters to the objectives of the different AUs and adjusts the camera setting to simultaneously enhance the performance of all of them. To define the multiple objectives in MORL, we develop new AU-specific quality estimators for each individual AU. We evaluate Elixir through real-world experiments on a testbed with three cameras deployed next to each other (overlooking a large enterprise parking lot), running Elixir and two baseline approaches, respectively. Elixir correctly detects 7.1% (22,068) and 5.0% (15,731) more cars, 94% (551) and 72% (478) more faces, and 670.4% (4975) and 158.6% (3507) more persons than the default-setting and time-sharing approaches, respectively. It also detects 115 license plates, far more than the time-sharing approach (7) and the default setting (0).
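As a concrete illustration of the MORL idea behind Elixir, the sketch below scalarizes several AU-specific quality objectives into one decision rule over a discretized camera-setting space. The `au_quality` function, the settings grid, and the equal-weight scalarization are illustrative assumptions, not Elixir's actual estimators or policy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized camera settings (brightness, contrast) -- illustrative, not Elixir's actual knobs.
SETTINGS = [(b, c) for b in (0.3, 0.5, 0.7) for c in (0.3, 0.5, 0.7)]
N_AUS = 3  # e.g. car detection, face detection, person detection

def au_quality(setting, au):
    """Hypothetical per-AU quality estimator in [0, 1]; stands in for AU-specific estimators."""
    b, c = setting
    peaks = [(0.3, 0.7), (0.7, 0.3), (0.5, 0.5)]   # each AU prefers a different setting
    pb, pc = peaks[au]
    return np.exp(-((b - pb) ** 2 + (c - pc) ** 2) / 0.1) + 0.05 * rng.standard_normal()

# Tabular multi-objective value estimates: one value per (setting, AU).
Q = np.zeros((len(SETTINGS), N_AUS))
alpha, eps = 0.1, 0.2
weights = np.ones(N_AUS) / N_AUS   # equal-priority scalarization (one common MORL choice)

for step in range(2000):
    if rng.random() < eps:
        a = int(rng.integers(len(SETTINGS)))
    else:
        a = int(np.argmax(Q @ weights))        # pick the setting with the best weighted quality
    reward_vec = np.array([au_quality(SETTINGS[a], k) for k in range(N_AUS)])
    Q[a] += alpha * (reward_vec - Q[a])        # bandit-style update per objective

best = int(np.argmax(Q @ weights))
print("chosen setting:", SETTINGS[best], "estimated per-AU quality:", np.round(Q[best], 2))
```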
Wildfires can be devastating, causing significant damage to property, ecosystem disruption, and loss of life. Forecasting the evolution of wildfire boundaries is essential to real-time wildfire management. To this end, substantial attention in the wildfire literature has focused on the level set method, which effectively represents complicated boundaries and their change over time. Nevertheless, most of these approaches rely on heavily parameterized formulas for spread and fail to account for the uncertainty in the forecast. The rapid evolution of large wildfires and inhomogeneous environmental conditions across the domain of interest (e.g., varying land cover, fire-induced winds) give rise to the need for a model that enables efficient data-driven learning of fire spread and allows uncertainty quantification. Here, we present a novel hybrid model that nests an echo state network, which learns nonlinear spatio-temporally evolving velocities (speed in the normal direction), within a physically based level set model framework. This model is computationally efficient and includes calibrated uncertainty quantification. We show the forecasting performance of our model on simulations and two real data sets: the Haybress and Thomas megafires that started in California (USA) in 2017.
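To make the hybrid formulation concrete, here is a minimal sketch of a level set update driven by a learned normal-direction speed. The `learned_speed` function is a stand-in for the echo state network component, and the grid, time step, and speed field are illustrative assumptions.

```python
import numpy as np

# Grid and initial fire front: a small circle represented by a signed-distance-like level set.
n = 100
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
phi = np.sqrt((x - 0.5) ** 2 + (y - 0.5) ** 2) - 0.1   # fire interior: phi < 0

def learned_speed(phi, t):
    """Placeholder for the data-driven (e.g. echo state network) spread rate in the normal direction."""
    return 0.05 * (1.0 + 0.5 * np.sin(6 * np.pi * x) * np.cos(4 * np.pi * y))

dt, dx = 0.01, 1.0 / (n - 1)
for step in range(100):
    # Central-difference gradient magnitude (adequate for this smooth illustrative field).
    gy, gx = np.gradient(phi, dx)
    grad_mag = np.sqrt(gx ** 2 + gy ** 2)
    phi = phi - dt * learned_speed(phi, step * dt) * grad_mag   # level set equation: phi_t + v|grad phi| = 0

burned_fraction = (phi < 0).mean()
print(f"fraction of domain inside the front after 100 steps: {burned_fraction:.3f}")
```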
Reliable uncertainty quantification on remaining useful life (RUL) prediction is crucial for informative decision-making in predictive maintenance. In this context, we assess some of the latest developments in uncertainty quantification for deep learning-based prognostics. These include state-of-the-art variational inference algorithms for Bayesian neural networks (BNN) as well as popular alternatives such as Monte Carlo Dropout (MCD), deep ensembles (DE), and heteroscedastic neural networks (HNN). All the inference techniques share the same inception deep learning architecture as a functional model. We performed a hyperparameter search to optimize the main variational and learning parameters of the algorithms. The performance of the methods is evaluated on a subset of the large NASA NCMAPSS dataset for aircraft engines. The assessment covers RUL prediction accuracy, the quality of the predictive uncertainty, and the ability to decompose the total predictive uncertainty into its aleatoric and epistemic parts. The results show that no method clearly outperforms the others in all situations. Although all methods are close in terms of accuracy, we find differences in the way they estimate uncertainty: DE and MCD generally provide more conservative predictive uncertainty than BNN. Surprisingly, HNN achieves strong results without the added training complexity and extra parameters of the BNN. For tasks like active learning, where a separation of epistemic and aleatoric uncertainty is required, radial BNN and MCD seem the best options.
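The aleatoric/epistemic decomposition these methods enable can be sketched with MC Dropout over a heteroscedastic network: average the predicted noise variances for the aleatoric part and take the spread of the predicted means across stochastic passes for the epistemic part. The tiny MLP and dummy inputs below are placeholders, not the paper's inception architecture or the NCMAPSS pipeline.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny heteroscedastic regressor with dropout; stands in for the inception-style architecture.
class HeteroscedasticMLP(nn.Module):
    def __init__(self, d_in=14, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(), nn.Dropout(0.2),
                                  nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(0.2))
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)   # predicts the log of the aleatoric variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def mc_dropout_uncertainty(model, x, n_samples=50):
    """Keep dropout active at test time and decompose the predictive variance."""
    model.train()                                  # keeps dropout stochastic
    with torch.no_grad():
        outs = [model(x) for _ in range(n_samples)]
    means = torch.stack([m for m, _ in outs])      # (T, N, 1)
    alea = torch.stack([lv.exp() for _, lv in outs]).mean(0)   # mean predicted noise variance
    epis = means.var(0)                            # spread of the mean predictions across passes
    return means.mean(0), alea, epis

x = torch.randn(5, 14)                             # dummy inputs; NCMAPSS features would go here
model = HeteroscedasticMLP()
mu, aleatoric, epistemic = mc_dropout_uncertainty(model, x)
print("RUL estimate:", mu.squeeze().tolist())
print("aleatoric var:", aleatoric.squeeze().tolist(), "epistemic var:", epistemic.squeeze().tolist())
```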
Geospatial technologies are becoming increasingly essential in our world for a large range of tasks, such as earth monitoring and natural disaster response. To improve the applicability and performance of deep learning models on these geospatial tasks, various works have pursued the idea of a geospatial foundation model, i.e., training networks from scratch on a large corpus of remote sensing imagery. However, this approach often requires a significant amount of data and training time to achieve suitable performance, especially when employing large state-of-the-art transformer models. In light of these challenges, we investigate a sustainable approach to building geospatial foundation models and identify two important factors in the process. First, the selection of pretraining data matters, even within the geospatial domain; we therefore gather a concise yet effective dataset for pretraining. Second, models pretrained on diverse datasets such as ImageNet-22k should not be ignored when building geospatial foundation models, as their representations are still surprisingly effective. Rather, by leveraging these representations, we can build strong models for geospatial applications in a sustainable manner. To this end, we formulate a multi-objective continual pretraining approach for training sustainable geospatial foundation models. We experiment on a wide variety of downstream datasets and tasks, achieving strong performance across the board in comparison to ImageNet baselines and state-of-the-art geospatial pretrained models.
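One way such multi-objective continual pretraining could be set up is sketched below, pairing a self-supervised reconstruction term with a feature-distillation term toward a frozen ImageNet-pretrained teacher. The toy encoders, losses, and weighting are assumptions for illustration and do not reproduce the paper's exact objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy encoders; in practice both would be ViTs and the teacher would load ImageNet-22k weights.
def make_encoder(dim=128):
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, dim))

teacher = make_encoder()           # frozen, stands in for an ImageNet-pretrained backbone
student = make_encoder()           # continually pretrained on remote sensing imagery
decoder = nn.Linear(128, 3 * 32 * 32)
for p in teacher.parameters():
    p.requires_grad = False

opt = torch.optim.AdamW(list(student.parameters()) + list(decoder.parameters()), lr=1e-3)
lam = 0.5                          # weight trading off the two objectives (illustrative value)

for step in range(20):
    imgs = torch.rand(16, 3, 32, 32)                                # dummy batch of "satellite" patches
    z = student(imgs)
    recon_loss = F.mse_loss(decoder(z), imgs.flatten(1))            # self-supervised reconstruction term
    distill_loss = 1 - F.cosine_similarity(z, teacher(imgs)).mean() # stay close to teacher features
    loss = recon_loss + lam * distill_loss
    opt.zero_grad(); loss.backward(); opt.step()

print(f"final loss: {loss.item():.4f}")
```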
We consider the problem of learning the dynamics of a linear system when one has access to data generated by an auxiliary system that shares similar (but not identical) dynamics, in addition to data from the true system. We use a weighted least squares approach and provide a finite-sample error bound on the learned model as a function of the number of samples, various system parameters from the two systems, and the weight assigned to the auxiliary data. We show that the auxiliary data can help reduce the intrinsic system identification error due to noise, at the cost of introducing an additional error term that stems from the differences between the two system models. We further provide a data-dependent bound that is computable when some prior knowledge about the systems is available. This bound can also be used to determine the weight that should be assigned to the auxiliary data during the model training stage.
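The weighted least squares estimator itself has a simple closed form; the sketch below solves it for a toy pair of systems and shows how the auxiliary-data weight trades noise reduction against model mismatch. System sizes, noise levels, and the mismatch magnitude are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                 # state dimension

# True system and a "similar but not identical" auxiliary system.
A_true = 0.9 * np.eye(n) + 0.05 * rng.standard_normal((n, n))
A_aux = A_true + 0.02 * rng.standard_normal((n, n))

def rollout(A, T, noise=0.1):
    X = np.zeros((T + 1, n)); X[0] = rng.standard_normal(n)
    for t in range(T):
        X[t + 1] = A @ X[t] + noise * rng.standard_normal(n)
    return X[:-1], X[1:]

X, Y = rollout(A_true, T=50)          # scarce data from the true system
Xa, Ya = rollout(A_aux, T=500)        # plentiful auxiliary data

def weighted_ls(X, Y, Xa, Ya, w):
    """Solve min_A ||Y - X A^T||^2 + w ||Ya - Xa A^T||^2 in closed form."""
    G = X.T @ X + w * Xa.T @ Xa
    H = X.T @ Y + w * Xa.T @ Ya
    return np.linalg.solve(G, H).T

for w in (0.0, 0.1, 1.0):
    A_hat = weighted_ls(X, Y, Xa, Ya, w)
    print(f"weight {w:>4}: estimation error {np.linalg.norm(A_hat - A_true, 2):.4f}")
```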
Inferring chemical reaction networks (CRN) from concentration time series is a challenge encouraged by the growing availability of quantitative temporal data at the cellular level. This motivates the design of algorithms to infer the preponderant reactions between the molecular species observed in a given biochemical process, and to build CRN structure and kinetics models. Existing ODE-based inference methods such as SINDy resort to least-squares regression combined with sparsity-enforcing penalization, such as Lasso. However, we observe that these methods fail to learn sparse models when the input time series are only available in wild-type conditions, i.e. without the possibility to play with combinations of zeroes in the initial conditions. We present Reactmine, a CRN inference algorithm which enforces sparsity by inferring reactions in a sequential fashion within a search tree of bounded depth, ranking the inferred reaction candidates according to the variance of their kinetics on their supporting transitions, and re-optimizing the kinetic parameters of the CRN candidates on the whole trace in a final pass. We show that Reactmine succeeds both on simulation data, by retrieving hidden CRNs where SINDy fails, and on two real datasets: one of fluorescence videomicroscopy of cell cycle and circadian clock markers, the other of biomedical measurements of systemic circadian biomarkers possibly acting on clock gene expression in peripheral organs, by inferring preponderant regulations in agreement with previous model-based analyses. The code is available at //gitlab.inria.fr/julmarti/crninf/ together with introductory notebooks.
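For context, the SINDy-style baseline that Reactmine is contrasted with amounts to sparse regression of finite-difference derivatives against a library of candidate mass-action terms, roughly as sketched below. This illustrates the baseline, not Reactmine's search-tree procedure; the toy CRN and Lasso settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Simulate a toy CRN  A -> B  (rate k1*[A])  and  B -> A  (rate k2*[B])  with mass-action kinetics.
k1, k2, dt, T = 0.8, 0.3, 0.01, 500
conc = np.zeros((T, 2)); conc[0] = [2.0, 0.0]          # columns: [A], [B]
for t in range(T - 1):
    a, b = conc[t]
    flux = k1 * a - k2 * b
    conc[t + 1] = conc[t] + dt * np.array([-flux, flux])

# SINDy-style step: finite-difference derivatives, candidate mass-action terms, sparse regression.
dX = np.gradient(conc, dt, axis=0)                     # estimated d[A]/dt, d[B]/dt
A, B = conc[:, 0], conc[:, 1]
library = np.column_stack([A, B, A * B])               # candidate rate terms
names = ["[A]", "[B]", "[A][B]"]

for species, target in zip(["d[A]/dt", "d[B]/dt"], dX.T):
    coef = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(library, target).coef_
    terms = " + ".join(f"{c:+.2f}*{n}" for c, n in zip(coef, names) if abs(c) > 1e-2)
    print(f"{species} ~ {terms}")
```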
Many tasks in video analysis and understanding boil down to the need for frame-based feature learning, which aims to encapsulate the relevant visual content so as to enable simpler and easier subsequent processing. While supervised strategies for this learning task can be envisioned, self- and weakly-supervised alternatives are preferred due to the difficulty of obtaining labeled data. This paper introduces LRProp -- a novel weakly-supervised representation learning approach, with an emphasis on temporal alignment between pairs of videos of the same action category. The proposed approach uses a transformer encoder to extract frame-level features, and employs the DTW algorithm within the training iterations to identify the alignment path between video pairs. Through a process referred to as "pair-wise position propagation", the probability distributions of these correspondences per location are matched with the similarity of the frame-level features via KL-divergence minimization. The proposed algorithm also uses a regularized SoftDTW loss for better tuning of the learned features. Our novel representation learning paradigm consistently outperforms the state of the art on temporal alignment tasks, setting a new performance bar across several downstream video analysis applications.
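The DTW step at the core of the training loop can be sketched as classic dynamic programming over a pairwise frame-feature distance matrix, as below; the random feature sequences are placeholders, and the pair-wise position propagation and regularized SoftDTW loss are not reproduced here.

```python
import numpy as np

def dtw_path(feats_a, feats_b):
    """Classic DTW over a pairwise-distance matrix between two frame-feature sequences."""
    D = np.linalg.norm(feats_a[:, None, :] - feats_b[None, :, :], axis=-1)   # (Ta, Tb) cost matrix
    Ta, Tb = D.shape
    acc = np.full((Ta + 1, Tb + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            acc[i, j] = D[i - 1, j - 1] + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    # Backtrack the optimal alignment path.
    path, i, j = [], Ta, Tb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([acc[i - 1, j - 1], acc[i - 1, j], acc[i, j - 1]])
        if step == 0: i, j = i - 1, j - 1
        elif step == 1: i -= 1
        else: j -= 1
    return path[::-1]

rng = np.random.default_rng(0)
video_a = rng.standard_normal((12, 8))   # 12 frames, 8-d features (transformer outputs in practice)
video_b = rng.standard_normal((15, 8))
print(dtw_path(video_a, video_b))
```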
The research in the sixth generation of communication networks needs to tackle new challenges in order to meet the requirements of emerging applications in terms of high data rate, low latency, high reliability, and massive connectivity. To this end, the entire communication chain needs to be optimized, including the channel and the surrounding environment, as it is no longer sufficient to control only the transmitter and/or the receiver. Investigating large intelligent surfaces, ultra-massive multiple-input multiple-output, and smart constructive environments will contribute to this direction. In addition, to allow the exchange of high-dimensional sensing data between connected intelligent devices, semantic and goal-oriented communications need to be considered for more efficient and context-aware information encoding. In particular, for multi-agent systems, where agents collaborate to achieve a complex task, emergent communications, instead of hard-coded communications, can be learned for more efficient task execution and use of communication resources. Moreover, new physical phenomena should be exploited, such as the thermodynamics of communication as well as the interaction between information theory and electromagnetism, to better understand the physical limitations of different technologies, e.g., holographic communications. Another new communication paradigm is to consider the end-to-end approach instead of block-by-block optimization, which requires exploiting machine learning theory, non-linear signal processing theory, and non-coherent communications theory. Within this context, we identify twelve scientific challenges for rebuilding the theoretical foundations of communications, and we overview each of the challenges while providing research opportunities and open questions for the research community.
Unbiased Learning to Rank (ULTR), which learns to rank documents from biased user feedback data, is a well-known challenge in information retrieval. Existing methods in unbiased learning to rank typically rely on click modeling or inverse propensity weighting (IPW). Unfortunately, search engines face a severely long-tailed query distribution that neither click modeling nor IPW handles well: click modeling suffers from the data sparsity problem, since the same query-document pair appears only a limited number of times on tail queries, while IPW suffers from the high-variance problem, since it is highly sensitive to small propensity score values. Therefore, a general debiasing framework that works well under tail queries is sorely needed. To address this problem, we propose a model-based unbiased learning-to-rank framework. Specifically, we develop a general context-aware user simulator to generate pseudo clicks for unobserved ranked lists to train rankers, which addresses the data sparsity problem. In addition, considering the discrepancy between pseudo clicks and actual clicks, we take the observation of a ranked list as the treatment variable and further incorporate inverse propensity weighting with pseudo labels in a doubly robust way. The derived bias and variance indicate that the proposed model-based method is more robust than existing methods. Finally, extensive experiments on benchmark datasets, including simulated datasets and real click logs, demonstrate that the proposed model-based method consistently outperforms state-of-the-art methods in various scenarios. The code is available at //github.com/rowedenny/MULTR.
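The doubly robust combination of pseudo clicks and IPW can be sketched per document as imputation plus a propensity-corrected residual on observed items; the synthetic quantities and pointwise loss below are illustrative assumptions and may differ from the paper's exact list-level estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs = 8

# Quantities a ULTR pipeline would supply; the values here are synthetic placeholders.
pseudo_clicks = rng.uniform(0.1, 0.9, n_docs)    # click probabilities imputed by the user simulator
observed = rng.random(n_docs) < 0.5              # whether each position was actually examined/logged
actual_clicks = (rng.random(n_docs) < pseudo_clicks).astype(float)
propensity = np.clip(rng.uniform(0.2, 1.0, n_docs), 0.1, None)   # examination propensities (clipped)

# Doubly robust relevance label: imputation plus an IPW-corrected residual on observed items.
dr_labels = pseudo_clicks + observed / propensity * (actual_clicks - pseudo_clicks)

# The DR label then drives an ordinary ranking loss; here a simple pointwise squared error.
scores = rng.standard_normal(n_docs)             # scores from the ranker being trained
loss = np.mean((scores - dr_labels) ** 2)
print("DR labels:", np.round(dr_labels, 2))
print("pointwise loss:", round(loss, 4))
```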
In large-scale systems there are fundamental challenges when centralised techniques are used for task allocation. The number of interactions is limited by resource constraints such as those on computation, storage, and network communication. We can increase scalability by implementing the system as a distributed task-allocation system, sharing tasks across many agents. However, this also increases the resource cost of communication and synchronisation, and is difficult to scale. In this paper we present four algorithms to solve these problems. Their combination enables each agent to improve its task allocation strategy through reinforcement learning, while adjusting how much it explores the system in response to how optimal it believes its current strategy to be, given its past experience. We focus on distributed agent systems where the agents' behaviours are constrained by resource usage limits, restricting agents to local rather than system-wide knowledge. We evaluate these algorithms in a simulated environment where agents are given a task composed of multiple subtasks that must be allocated to other agents with differing capabilities, which then carry out those tasks. We also simulate real-life system effects such as networking instability. Our solution is shown to solve the task allocation problem to within 6.7% of the theoretical optimum within the system configurations considered. It provides 5x better performance recovery over no-knowledge-retention approaches when system connectivity is impacted, and is tested on systems of up to 100 agents with less than a 9% impact on the algorithms' performance.
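A single-agent sketch of the core idea, learning which peer to hand each subtask to while scaling exploration by how settled the learned strategy appears, is given below. The capability model and the TD-error heuristic for adjusting exploration are illustrative assumptions rather than the paper's four algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)
n_peers, n_task_types = 4, 3

# Hidden peer capabilities: probability each peer completes each subtask type successfully.
capability = rng.uniform(0.2, 0.95, size=(n_peers, n_task_types))

Q = np.zeros((n_task_types, n_peers))   # expected success of sending a subtask type to a peer
alpha, eps, eps_min = 0.2, 1.0, 0.05
recent_td = []                          # rolling TD errors: a proxy for how settled the strategy is

for step in range(3000):
    task = int(rng.integers(n_task_types))
    peer = int(rng.integers(n_peers)) if rng.random() < eps else int(np.argmax(Q[task]))
    reward = float(rng.random() < capability[peer, task])       # 1 if the peer completed the subtask
    td = reward - Q[task, peer]
    Q[task, peer] += alpha * td
    # Adapt exploration: explore less once recent TD errors suggest the strategy is near-optimal.
    recent_td.append(abs(td))
    if len(recent_td) >= 100:
        eps = max(eps_min, min(1.0, 2.0 * np.mean(recent_td)))
        recent_td.clear()

print("greedy allocation per task type:", np.argmax(Q, axis=1))
print("true best peer per task type:  ", np.argmax(capability, axis=0))
```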
Image-to-image translation (I2I) aims to transfer images from a source domain to a target domain while preserving the content representations. I2I has drawn increasing attention and made tremendous progress in recent years because of its wide range of applications in many computer vision and image processing problems, such as image synthesis, segmentation, style transfer, restoration, and pose estimation. In this paper, we provide an overview of the I2I works developed in recent years. We will analyze the key techniques of the existing I2I works and clarify the main progress the community has made. Additionally, we will elaborate on the effect of I2I on the research and industry community and point out remaining challenges in related fields.