Accurate electricity demand forecasting is crucial for energy security and efficiency, especially when relying on intermittent renewable energy sources. Recently, massive savings have been observed in Europe, following an unprecedented global energy crisis. However, assessing the impact of such a crisis, and of government incentives, on electricity consumption behaviour is challenging. Moreover, standard statistical models based on meteorological and calendar data have difficulty adapting to such abrupt changes. Here, we show that mobility indices based on mobile network data significantly improve the performance of state-of-the-art models in electricity demand forecasting during the sobriety period. We start by documenting the drop in French electricity consumption during the winter of 2022-2023. We then show how our mobile network data capture work dynamics and how adding these mobility indices outperforms the state of the art during this atypical period. Our results characterise the effect of work behaviours on electricity demand.
In evidence synthesis, effect modifiers are typically described as variables that induce treatment effect heterogeneity at the individual level, through treatment-covariate interactions in an outcome model parameterized at that level. As such, effect modification is defined with respect to a conditional measure, but marginal effect estimates are required for population-level decisions in health technology assessment. For non-collapsible measures, purely prognostic variables that are not determinants of treatment response at the individual level may modify marginal effects, even where there is individual-level treatment effect homogeneity. With heterogeneity, marginal effects for measures that are not directly collapsible cannot be expressed in terms of marginal covariate moments, and generally depend on the joint distribution of conditional effect measure modifiers and purely prognostic variables. There are implications for recommended practices in evidence synthesis. Unadjusted anchored indirect comparisons can be biased even in the absence of individual-level treatment effect heterogeneity, and even when marginal covariate moments are balanced across studies. Covariate adjustment may be necessary to account for cross-study imbalances in joint covariate distributions involving purely prognostic variables. In the absence of individual patient data for the target, covariate adjustment approaches are inherently limited in their ability to remove bias for measures that are not directly collapsible. Directly collapsible measures would facilitate the transportability of marginal effects between studies by: (1) reducing dependence on model-based covariate adjustment where there is individual-level treatment effect homogeneity and marginal covariate moments are balanced; and (2) facilitating the selection of baseline covariates for adjustment where there is individual-level treatment effect heterogeneity.
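To make the non-collapsibility point concrete, here is a small numerical sketch (our illustration, not taken from the paper): a logistic outcome model with a homogeneous conditional treatment effect and a purely prognostic binary covariate, where the marginal odds ratio nonetheless differs from the conditional one. All numbers (intercepts, effect size, covariate prevalence) are arbitrary choices for illustration.

```python
import math

# Toy non-collapsibility demo (illustrative numbers, not from the paper).
# X is a purely prognostic binary covariate with P(X=1) = 0.5; the
# conditional (log) odds ratio for treatment is identical in both strata.
beta_trt = 1.0                 # homogeneous conditional log odds ratio
alpha = {0: -1.0, 1: 1.5}      # stratum-specific intercepts (X prognostic only)

def risk(treat, x):
    lp = alpha[x] + beta_trt * treat
    return 1.0 / (1.0 + math.exp(-lp))

def odds(p):
    return p / (1.0 - p)

# Conditional odds ratios: equal across strata of X
or_x0 = odds(risk(1, 0)) / odds(risk(0, 0))
or_x1 = odds(risk(1, 1)) / odds(risk(0, 1))

# Marginal odds ratio: average the risks over X, then take the ratio of odds
p1 = 0.5 * (risk(1, 0) + risk(1, 1))
p0 = 0.5 * (risk(0, 0) + risk(0, 1))
or_marginal = odds(p1) / odds(p0)

print(or_x0, or_x1)    # both exp(1.0) ≈ 2.72
print(or_marginal)     # ≈ 2.08: attenuated despite homogeneity
```

Despite exact individual-level homogeneity (both conditional odds ratios equal exp(1)), the marginal odds ratio is attenuated: a purely prognostic covariate changes the marginal estimand.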
Regularized imaging spectroscopy was introduced for the construction of electron flux images at different energies from count visibilities recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). In this work we seek to extend this approach to data from the Spectrometer/Telescope for Imaging X-rays (STIX) on board the Solar Orbiter mission. Our aims are to demonstrate the feasibility of regularized imaging spectroscopy as a method for the analysis of STIX data, and also to show how such analysis can lead to insights into the physical processes affecting the nonthermal electrons responsible for the hard X-ray emission observed by STIX. STIX records imaging data in an intrinsically different manner from RHESSI. Rather than sweeping the angular frequency plane in a set of concentric circles (one circle per detector), STIX uses $30$ collimators, each corresponding to a specific angular frequency. In this paper we derive an appropriate modification of the previous computational approach for the analysis of the visibilities observed by STIX. This approach also allows the observed count data to be placed into non-uniformly spaced energy bins. We show that the regularized imaging spectroscopy approach is not only feasible for the analysis of the visibilities observed by STIX, but also more reliable. Application of the regularized imaging spectroscopy technique to several well-observed flares reveals details of the variation of the electron flux spectrum throughout the flare sources. We conclude that the visibility-based regularized imaging spectroscopy approach is well suited to the analysis of STIX data. We also use STIX electron flux spectral images to track, for the first time, the behavior of the accelerated electrons along their path from the acceleration site in the solar corona toward the chromosphere.
We investigate the spectrum of differentiation matrices for certain operators on the sphere that are generated from collocation at a set of scattered points $X$ with positive definite and conditionally positive definite kernels. We focus on the case where these matrices are constructed from collocation using all the points in $X$ and from local subsets of points (or stencils) in $X$. The former case is often referred to as the global, Kansa, or pseudospectral method, while the latter is referred to as the local radial basis function (RBF) finite difference (RBF-FD) method. Both techniques are used extensively for numerically solving certain partial differential equations (PDEs) on spheres (and other domains). For time-dependent PDEs on spheres like the (surface) diffusion equation, the spectrum of the differentiation matrices and their stability under perturbations are central to understanding the temporal stability of the underlying numerical schemes. In the global case, we present a perturbation estimate for differentiation matrices which discretize operators that commute with the Laplace-Beltrami operator. In doing so, we demonstrate that if such an operator has negative (non-positive) spectrum, then the differentiation matrix does, too (i.e., it is Hurwitz stable). For conditionally positive definite kernels this is particularly challenging since the differentiation matrices are not necessarily diagonalizable. This perturbation theory is then used to obtain bounds on the spectra of the local RBF-FD differentiation matrices based on the conditionally positive definite surface spline kernels. Numerical results are presented to confirm the theoretical estimates.
This study explores the integration of the hyper-power sequence, a method commonly employed for approximating the Moore-Penrose inverse, to enhance the effectiveness of an existing preconditioner. The approach is closely related to polynomial preconditioning based on Neumann series. We commence with a state-of-the-art matrix-free preconditioner designed for the saddle point system derived from isogeometric structure-preserving discretization of the Stokes equations. Our results demonstrate that incorporating multiple iterations of the hyper-power method enhances the effectiveness of the preconditioner, leading to a substantial reduction in both iteration counts and overall solution time for simulating Stokes flow within a 3D lid-driven cavity. Through a comprehensive analysis, we assess the stability, accuracy, and numerical cost associated with the proposed scheme.
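As a minimal, self-contained sketch of the underlying building block (our illustration, not the paper's matrix-free Stokes preconditioner), the order-2 hyper-power iteration, also known as Newton-Schulz, approximates the Moore-Penrose inverse; the matrix size, scaling of the initial guess, and iteration count below are arbitrary choices.

```python
import numpy as np

# Order-2 hyper-power (Newton-Schulz) iteration X_{k+1} = X_k (2I - A X_k),
# which converges to the Moore-Penrose inverse of A when the initial guess
# X_0 = A^T / (||A||_1 ||A||_inf) is used (a standard sufficient scaling).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))          # tall matrix, full column rank
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(5)
for _ in range(50):                      # quadratic convergence; 50 is overkill
    X = X @ (2 * I - A @ X)

err = np.linalg.norm(X - np.linalg.pinv(A))
print(err)   # ≈ 0 to machine precision
```

In the preconditioning setting described above, the same recurrence is applied with the existing preconditioner playing the role of the initial approximate inverse, so each extra hyper-power step trades additional operator applications for a closer approximation.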
Ground settlement prediction during mechanized tunneling is of paramount importance and remains a challenging research topic. Typically, two paradigms exist: a physics-driven approach utilizing process-oriented computational simulation models for the tunnel-soil interaction and the settlement prediction, and a data-driven approach employing machine learning techniques to establish mappings between influencing factors and the ground settlement. To integrate the advantages of both approaches and to assimilate data from different sources, we propose a multi-fidelity deep operator network (DeepONet) framework, leveraging recently developed operator learning methods. The presented framework comprises two components: a low-fidelity subnet that captures the fundamental ground settlement patterns obtained from finite element simulations, and a high-fidelity subnet that learns the nonlinear correlation between numerical models and real engineering monitoring data. A pre-processing strategy for causality is adopted to account for the spatio-temporal characteristics of the settlement during tunnel excavation. Transfer learning is utilized to reduce the training cost of the low-fidelity subnet. The results show that the proposed method effectively captures the physical information provided by the numerical simulations and accurately fits the measured data as well. Remarkably, even with very limited noisy monitoring data, the proposed model achieves rapid, accurate, and robust predictions of the full-field ground settlement in real time during mechanized tunnel excavation.
Over the recent past, data-driven algorithms for solving stochastic optimal control problems in the face of model uncertainty have become an increasingly active area of research. However, for singular controls and underlying diffusion dynamics, the analysis has so far been restricted to the scalar case. In this paper we fill this gap by studying a multivariate singular control problem for reversible diffusions with controls of reflection type. Our contributions are threefold. We first explicitly determine the long-run average costs as a domain-dependent functional, showing that the control problem can be equivalently characterized as a shape optimization problem. For given diffusion dynamics, assuming the optimal domain to be strongly star-shaped, we then propose a gradient descent algorithm based on polytope approximations to numerically determine a cost-minimizing domain. Finally, we investigate data-driven solutions when the diffusion dynamics are unknown to the controller. Using techniques from nonparametric statistics for stochastic processes, we construct an optimal domain estimator whose static regret is bounded by the minimax optimal estimation rate of the unreflected process' invariant density. In the most challenging situation, when the dynamics must be learned while simultaneously controlling the process, we develop an episodic learning algorithm to overcome the emerging exploration-exploitation dilemma and show that, with the static regret as a baseline, the loss in its sublinear regret per time unit is of natural order compared to the one-dimensional case.
With the remarkable progress that technology has made, the need for processing data near the sensors at the edge has increased dramatically. The electronic systems used in these applications must process data continuously, in real time, and extract relevant information using the smallest possible energy budgets. A promising approach for implementing always-on processing of sensory signals that supports on-demand, sparse, and edge computing is to take inspiration from biological nervous systems. Following this approach, we present a brain-inspired platform for prototyping real-time event-based Spiking Neural Networks (SNNs). The proposed system supports the direct emulation of dynamic and realistic neural processing phenomena such as short-term plasticity, NMDA gating, AMPA diffusion, homeostasis, spike frequency adaptation, conductance-based dendritic compartments, and spike transmission delays. The analog circuits that implement these primitives are paired with low-latency asynchronous digital circuits for routing and mapping events. This asynchronous infrastructure enables the definition of different network architectures and provides direct event-based interfaces to convert and encode data from event-based and continuous-signal sensors. Here we describe the overall system architecture, characterize the mixed-signal analog-digital circuits that emulate neural dynamics, demonstrate their features with experimental measurements, and present a low- and high-level software ecosystem that can be used for configuring the system. The flexibility to emulate different biologically plausible neural networks, and the chip's ability to monitor both population and single-neuron signals in real time, make it possible to develop and validate complex models of neural processing for both basic research and edge-computing applications.
The angular measure on the unit sphere characterizes the first-order dependence structure of the components of a random vector in extreme regions and is defined in terms of standardized margins. Its statistical recovery is an important step in learning problems involving observations far away from the center. In the common situation that the components of the vector have different distributions, the rank transformation offers a convenient and robust way of standardizing data in order to build an empirical version of the angular measure based on the most extreme observations. We provide a functional asymptotic expansion for the empirical angular measure in the bivariate case, based on the theory of weak convergence in the space of bounded functions. From the expansion, not only can the known asymptotic distribution of the empirical angular measure be recovered, but expansions and weak limits can also be derived for other statistics based on the associated empirical process or its quantile version.
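A minimal sketch of the standard rank-based construction referenced above (our toy data and arbitrary choices of n and k, not from the paper): standardize each margin to the unit-Pareto scale via ranks, then keep the angular components of the k observations with the largest pseudo-radius.

```python
import numpy as np

# Toy bivariate sample with extremal dependence (our construction): a shared
# heavy-tailed factor plus light-tailed noise.
rng = np.random.default_rng(1)
n, k = 10_000, 500
s = 1.0 / rng.random(n)                  # shared standard-Pareto factor
x = s + rng.exponential(size=n)
y = s + rng.exponential(size=n)

def pareto_margins(a):
    ranks = a.argsort().argsort() + 1.0  # ranks 1..n (no ties, continuous data)
    return n / (n + 1.0 - ranks)         # rank transform to unit-Pareto scale

u, v = pareto_margins(x), pareto_margins(y)
r = u + v                                # L1 pseudo-radius
w = u / r                                # angular component in [0, 1]

idx = np.argsort(r)[-k:]                 # indices of the k most extreme points
w_ext = w[idx]                           # sample from the empirical angular measure
print(w_ext.mean())                      # ≈ 0.5 for this symmetric construction
```

The empirical angular measure studied in the abstract is the distribution of these `w_ext` values; the functional expansion describes its fluctuations around the true angular measure as n and k grow.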
Chaotic systems make long-horizon forecasts difficult because small perturbations in initial conditions cause trajectories to diverge at an exponential rate. In this setting, neural operators trained to minimize squared error losses, while capable of accurate short-term forecasts, often fail to reproduce statistical or structural properties of the dynamics over longer time horizons and can yield degenerate results. In this paper, we propose an alternative framework designed to preserve invariant measures of chaotic attractors that characterize the time-invariant statistical properties of the dynamics. Specifically, in the multi-environment setting (where each sample trajectory is governed by slightly different dynamics), we consider two novel approaches to training with noisy data. First, we propose a loss based on the optimal transport distance between the observed dynamics and the neural operator outputs. This approach requires expert knowledge of the underlying physics to determine what statistical features should be included in the optimal transport loss. Second, we show that a contrastive learning framework, which does not require any specialized prior knowledge, can preserve statistical properties of the dynamics nearly as well as the optimal transport approach. On a variety of chaotic systems, our method is shown empirically to preserve invariant measures of chaotic attractors.
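As a toy illustration of the optimal-transport idea (our construction; the paper's loss and choice of statistical features may differ), the exact 1-D Wasserstein-1 distance between two equal-size empirical measures reduces to the mean absolute difference of sorted samples:

```python
import numpy as np

# W1 distance between empirical measures of two equal-length 1-D samples,
# e.g. the long-run values of one state coordinate of a chaotic system.
def w1_loss(observed, rollout):
    """Exact 1-D Wasserstein-1 via the sorted-sample (quantile) coupling."""
    return np.mean(np.abs(np.sort(observed) - np.sort(rollout)))

rng = np.random.default_rng(0)
# stand-ins for trajectory statistics (illustrative distributions, not real dynamics)
obs = rng.normal(0.0, 1.0, size=5_000)
good = rng.normal(0.0, 1.0, size=5_000)   # matches the invariant statistics
bad = rng.normal(0.5, 1.0, size=5_000)    # shifted: wrong long-run statistics

print(w1_loss(obs, good) < w1_loss(obs, bad))   # True: OT loss favors the match
```

The point of such a loss is exactly the one made above: it compares distributions of trajectory values rather than pointwise squared errors, so it remains informative after nearby trajectories have diverged.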
One essential problem in quantifying the collective behaviors of molecular systems lies in the accurate construction of free energy surfaces (FESs). The main challenges arise from the prevalence of energy barriers and the high dimensionality. Existing approaches are often based on sophisticated enhanced sampling methods to establish efficient exploration of the full phase space. On the other hand, the collection of optimal sample points for the numerical approximation of FESs remains largely under-explored, where the discretization error could become dominant for systems with a large number of collective variables (CVs). We propose a consensus sampling-based approach that reformulates the construction as a minimax problem which simultaneously optimizes the function representation and the training set. In particular, the maximization step establishes a stochastic interacting particle system to achieve the adaptive sampling of the max-residue regime by modulating the exploitation of the Laplace approximation of the current loss function and the exploration of the uncharted phase space; the minimization step updates the FES approximation with the new training set. By iteratively solving the minimax problem, the present method essentially achieves an adversarial learning of the FESs with unified tasks for both phase space exploration and posterior error-enhanced sampling. We demonstrate the method by constructing the FESs of molecular systems with up to 30 CVs.
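A schematic 1-D toy version of the minimax alternation (ours; the paper's method uses a stochastic interacting particle system over molecular CVs, not a greedy grid search): the minimization step refits a surrogate on the current training set, and the maximization step adds the candidate point with the largest residual.

```python
import numpy as np

# Toy adversarial fitting loop (illustrative stand-ins throughout):
# f is a stand-in "free energy" profile, the surrogate is a polynomial,
# and the max step greedily picks the worst-fit candidate point.
rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x) + 0.5 * x**2        # stand-in target FES
cands = np.linspace(-2.0, 2.0, 401)             # candidate pool ("phase space")
train = list(rng.uniform(-2.0, 2.0, 5))         # small initial training set

for _ in range(20):
    xs = np.array(train)
    deg = min(len(xs) - 1, 8)
    coef = np.polyfit(xs, f(xs), deg)           # min step: refit the surrogate
    resid = np.abs(np.polyval(coef, cands) - f(cands))
    train.append(cands[resid.argmax()])         # max step: add max-residue point

final_err = np.max(np.abs(np.polyval(coef, cands) - f(cands)))
print(final_err)   # small: training points concentrate where the fit was worst
```

The same two-step structure underlies the abstract's formulation, with the greedy argmax replaced by sampling from a particle system concentrated on the max-residue regime, which scales to many CVs where a candidate grid does not.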