Forecast combination uses multiple forecasts to produce a single, more accurate prediction. Recently, feature-based forecasting has been employed either to select the most appropriate forecasting models or to learn the weights of their convex combination. In this paper, we present a multi-task learning methodology that addresses both problems simultaneously. The approach is implemented as a deep neural network with two branches: a regression branch, which learns the weights of the candidate forecasting methods by minimizing the error of the combined forecasts, and a classification branch, which selects forecasting methods with an emphasis on their diversity. To generate training labels for the classification task, we introduce an optimization-driven approach that identifies the most appropriate methods for a given time series. The proposed approach underscores the essential role of diversity in feature-based forecasting and highlights the interplay between model combination and model selection when learning forecasting ensembles. Experimental results on a large set of series from the M4 competition dataset show that our proposal enhances point forecast accuracy compared to state-of-the-art methods.
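The two-branch idea can be sketched with a minimal numpy forward pass. Everything below (the weight matrices `W_reg`/`W_cls`, dimensions, and the absence of the shared feature trunk) is a hypothetical illustration, not the paper's architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_features, n_methods, horizon = 8, 4, 6
# Hypothetical per-branch heads on top of shared time-series features.
W_reg = rng.normal(size=(n_features, n_methods))   # regression branch
W_cls = rng.normal(size=(n_features, n_methods))   # classification branch

features = rng.normal(size=n_features)             # features of one series
candidate_forecasts = rng.normal(size=(n_methods, horizon))

# Regression branch: convex-combination weights over the method pool.
weights = softmax(features @ W_reg)
combined = weights @ candidate_forecasts           # combined point forecasts

# Classification branch: per-method appropriateness scores (sigmoid),
# trained in the paper against optimization-derived diversity-aware labels.
scores = 1.0 / (1.0 + np.exp(-(features @ W_cls)))
selected = scores > 0.5
```

In a trained model the two branches would share a feature extractor and be optimized jointly under a multi-task loss; the sketch only shows how one set of features yields both combination weights and selection scores.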
We introduce a new technique called Drapes to enhance the sensitivity in searches for new physics at the LHC. By training diffusion models on side-band data, we show how background templates for the signal region can be generated either directly from noise, or by partially applying the diffusion process to existing data. In the partial diffusion case, data can be drawn from side-band regions, with the inverse diffusion performed for new target conditional values, or from the signal region, preserving the distribution over the conditional property that defines the signal region. We apply this technique to the hunt for resonances using the LHCO di-jet dataset, and achieve state-of-the-art performance for background template generation using high level input features. We also show how Drapes can be applied to low level inputs with jet constituents, reducing the model dependence on the choice of input observables. Using jet constituents we can further improve sensitivity to the signal process, but observe a loss in performance where the signal significance before applying any selection is below 4$\sigma$.
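The "partial diffusion" variant can be illustrated with the standard DDPM forward process: noise events only part-way along the chain, then hand them to a conditional denoiser for a new target value. The schedule, dimensions, and the identity `denoise_stub` below are stand-ins for illustration only, not the trained model from the paper:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)

def noise_to_step(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) of the forward diffusion process."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def denoise_stub(x_t, t, cond):
    """Placeholder for the learned reverse process, conditioned on a
    new target value (e.g. resonance mass); identity here."""
    return x_t

rng = np.random.default_rng(1)
sideband_events = rng.standard_normal((256, 5))      # high-level features
t_mid = 400                                          # partial-diffusion depth
x_noised = noise_to_step(sideband_events, t_mid, rng)
templates = denoise_stub(x_noised, t_mid, cond=3.5)  # hypothetical target
```

Generating directly from noise corresponds to starting the reverse process at `t = T - 1` instead of an intermediate `t_mid`.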
Compared to large speech foundation models, small distilled models exhibit degraded noise robustness. The student's robustness can be improved by introducing noise at the inputs during pre-training; even so, the standard distillation loss still yields a student with degraded performance. This paper therefore proposes improving student robustness via distillation with correlation metrics. Teacher behavior is learned by driving the cross-correlation matrix between teacher and student representations towards the identity. Noise robustness is encouraged by minimizing the student's self-correlation. The proposed method is agnostic to the teacher model and consistently outperforms the previous approach. This work also proposes a heuristic to automatically weigh the importance of the two correlation terms. Experiments show consistently better generalization in clean and noisy conditions on Intent Classification, Keyword Spotting, and Automatic Speech Recognition tasks from the SUPERB Challenge.
In this paper, we propose and analyze a diffuse interface model for inductionless magnetohydrodynamic fluids. The model couples a convective Cahn-Hilliard equation for the evolution of the interface, the Navier-Stokes system for the fluid flow, and the Poisson equation for electrostatics. The model is derived systematically from Onsager's variational principle and conservation laws. We perform formally matched asymptotic expansions and develop several sharp interface models in the limit as the interfacial thickness tends to zero. It is shown that the sharp interface limits of the model are the standard incompressible inductionless magnetohydrodynamic equations coupled with different interface conditions for different choices of the mobility. Numerical results verify the convergence of the diffuse interface model for different mobilities.
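In a commonly used formulation of such Cahn-Hilliard/inductionless-MHD couplings, the system has the schematic form below. The notation, coefficients, and double-well potential are assumptions for illustration; the paper's precise model, mobilities, and boundary conditions may differ:

```latex
\begin{align*}
&\partial_t \phi + \mathbf{u}\cdot\nabla\phi
  = \nabla\cdot\big(M(\phi)\nabla\mu\big),
\qquad \mu = -\epsilon\Delta\phi + \epsilon^{-1}f'(\phi),\\
&\rho\,(\partial_t\mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u})
  = \nabla\cdot\big(2\eta(\phi)\,D(\mathbf{u})\big) - \nabla p
    + \mu\nabla\phi + \mathbf{J}\times\mathbf{B},
\qquad \nabla\cdot\mathbf{u} = 0,\\
&\mathbf{J} = \sigma\big({-\nabla\varphi} + \mathbf{u}\times\mathbf{B}\big),
\qquad \nabla\cdot\big(\sigma\nabla\varphi\big)
  = \nabla\cdot\big(\sigma\,\mathbf{u}\times\mathbf{B}\big),
\end{align*}
```

where the last (Poisson) equation for the electric potential $\varphi$ enforces charge conservation $\nabla\cdot\mathbf{J}=0$ in the inductionless approximation, and $M(\phi)$ is the mobility whose scaling governs which sharp interface limit is obtained.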
Introducing a coupling framework reminiscent of FETI methods, but here in abstract form, we establish conditions for stability and minimal requirements for well-posedness at the continuous level, as well as conditions on local solvers for the approximation of subproblems. We then discuss the stability of the resulting Lagrange multiplier methods and show stability under a mesh condition relating the local discretizations to the mortar space. If this condition is not satisfied, we show how a stabilization, acting only on the multiplier, can be used to achieve stability. The design of preconditioners for the Schur complement system is discussed in the unstabilized case. Finally, we discuss some applications that fit within the framework.
Substantial efforts have been made, in both industry and academia, to develop various Decision Modeling formalisms. A challenging problem is that of expressing decision knowledge in the context of incomplete knowledge. In such contexts, decisions depend on what is known or not known. We argue that none of the existing formalisms for modeling decisions are capable of correctly capturing the epistemic nature of such decisions, inevitably causing issues in situations of uncertainty. This paper presents a new language for modeling decisions with incomplete knowledge. It combines three principles: stratification, autoepistemic logic, and definitions. A knowledge base in this language is a hierarchy of epistemic theories, where each component theory may epistemically reason on the knowledge in lower theories, and decisions are made using definitions with epistemic conditions.
We consider logistic regression with two sets of discrete or categorical covariates that are missing at random (MAR), separately or simultaneously. We examine the asymptotic properties of two multiple imputation (MI) estimators, given in the study of Lee et al. (2023), for the parameters of this logistic regression model. The proposed estimated asymptotic variances of the two MI estimators address a limitation of Rubin's (1987) estimated variances, which underestimate the variances of the two MI estimators. Simulation results demonstrate that our two proposed MI methods outperform the complete-case, semiparametric inverse probability weighting, random forest MI using chained equations, and stochastic approximation of expectation-maximization methods. To illustrate the methodology's practical application, we provide a real data example from a survey conducted in the Feng Chia night market in Taichung City, Taiwan.
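For context, Rubin's (1987) combining rules, whose variance estimate the paper argues can be too small in this setting, pool the m completed-data analyses as follows (a standard-formula sketch, not the paper's corrected estimator):

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Rubin's rules for m multiply-imputed analyses: pooled point
    estimate Q-bar, within-imputation variance W, between-imputation
    variance B, and total variance T = W + (1 + 1/m) B."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    m = len(estimates)
    q_bar = estimates.mean()                         # pooled estimate
    w = variances.mean()                             # within-imputation
    b = ((estimates - q_bar) ** 2).sum() / (m - 1)   # between-imputation
    total = w + (1.0 + 1.0 / m) * b
    return q_bar, total
```

For example, pooling estimates (1, 2, 3) with equal within-variances 0.5 gives Q-bar = 2 and T = 0.5 + (4/3)(1) ≈ 1.83.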
This manuscript aims to formalize and conclude the discussions initiated during the PriTEM workshop held on 22-23 March 2023. We present important ideas and discussion topics in the context of transactive energy systems. Moreover, the conclusions from the discussions articulate potential aspects to be explored in future studies on transactive energy management. In particular, these conclusions cover research topics in energy technology, energy informatics, energy law, data law, energy markets, and socio-psychology that are relevant to the seamless integration of renewable energy resources into transactive energy systems in smart microgrids, focusing on distributed frameworks such as peer-to-peer (P2P) energy trading. We clarify issues, identify barriers, and suggest possible solutions to open questions in diverse topics such as blockchain interoperability, consumer privacy and data sharing, and participation incentivization. Furthermore, we elaborate on the challenges associated with cross-disciplinary collaboration and coordination for transactive energy systems, and enumerate the lessons learned from our work so far.
The aim of this paper is to give a systematic mathematical interpretation of the diffusion problem on which Graph Neural Network (GNN) models are based. The starting point of our approach is a dissipative functional leading to dynamical equations that allow us to study the symmetries of the model. We discuss the conserved charges and provide a charge-preserving numerical method for solving the dynamical equations. In any dynamical system, and in particular in Graph Neural Diffusion (GRAND), knowing the charge values and their conservation along the evolution flow could provide a way to understand how GNNs and other networks achieve their learning capabilities.
Spatial data can come in a variety of different forms, but two of the most common generating models for such observations are random fields and point processes. While it is known that spectral analysis can unify these two data forms, specific methodology for the related estimation has yet to be developed. In this paper, we solve this problem by extending multitaper estimation to estimate the spectral density matrix function for multivariate spatial data, where the processes can be any combination of point processes and random fields. We discuss finite sample and asymptotic theory for the proposed estimators, as well as specific details of the implementation, including how to perform estimation on non-rectangular domains and how to correctly implement multitapering for processes sampled in different ways, e.g. continuously versus on a regular grid.
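The basic multitaper idea, averaging several direct spectral estimates computed with orthogonal tapers, can be shown in one dimension. This sketch uses the simple sine-taper family and a plain FFT; the paper's estimator for multivariate spatial fields and point processes is considerably more general:

```python
import numpy as np

def sine_tapers(n, k):
    """First k sine tapers, an orthonormal multitaper family."""
    j = np.arange(1, n + 1)
    return np.stack([
        np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * (kk + 1) * j / (n + 1))
        for kk in range(k)
    ])

def multitaper_spectrum(x, k=5):
    """Average of k direct (tapered periodogram) spectral estimates.
    One-dimensional, regularly sampled illustration only."""
    n = len(x)
    tapers = sine_tapers(n, k)
    eigenspectra = np.abs(np.fft.rfft(tapers * x, axis=1)) ** 2
    return eigenspectra.mean(axis=0)
```

Averaging over orthogonal tapers trades a small amount of bandwidth for a large reduction in the variance of the raw periodogram, which is the property the spatial extension inherits.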
Generative models inspired by dynamical transport of measure -- such as flows and diffusions -- construct a continuous-time map between two probability densities. Conventionally, one of these is the target density, only accessible through samples, while the other is taken as a simple base density that is data-agnostic. In this work, using the framework of stochastic interpolants, we formalize how to \textit{couple} the base and the target densities, whereby samples from the base are computed conditionally given samples from the target in a way that is different from (but does not preclude) incorporating information about class labels or continuous embeddings. This enables us to construct dynamical transport maps that serve as conditional generative models. We show that these transport maps can be learned by solving a simple square loss regression problem analogous to the standard independent setting. We demonstrate the usefulness of constructing dependent couplings in practice through experiments in super-resolution and in-painting.
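The square-loss regression can be sketched for a linear interpolant with a coupled base. Here the coupling (base samples as noisy copies of target samples, mimicking a degradation as in super-resolution) and all dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

x1 = rng.standard_normal((512, 2))              # target samples
x0 = x1 + 0.1 * rng.standard_normal((512, 2))   # coupled base: degraded x1

# Linear stochastic interpolant between the coupled pair at random times.
t = rng.uniform(size=(512, 1))
x_t = (1.0 - t) * x0 + t * x1                   # interpolant x_t
target = x1 - x0                                # velocity the model regresses

def sq_loss(pred, target):
    """Square loss minimized over a velocity model b(t, x_t)."""
    return ((pred - target) ** 2).mean()
```

A velocity field `b(t, x)` fit by minimizing `sq_loss(b(t, x_t), target)` over such triples then defines an ODE whose flow transports degraded samples to the target density, exactly as in the independent-coupling case but with the pair drawn jointly.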