Forecasting of renewable energy generation provides key insights that can support decision-making towards global decarbonisation. Renewable energy generation can often be represented through cross-sectional hierarchies, whereby a single farm comprises multiple individual generators. Hierarchical forecasting through reconciliation has been shown, both theoretically and empirically, to significantly improve forecast quality. However, it is not evident whether forecasts obtained by separate temporal and cross-sectional aggregation can be superior to integrated cross-temporal forecasts or to individual forecasts on more granular data. In this study, we investigate the accuracy of different cross-sectional and cross-temporal reconciliation methods, using both linear regression and gradient-boosting machine learning models, for forecasting wind farm power generation. We find that cross-temporal reconciliation is superior to individual cross-sectional reconciliation at multiple temporal aggregation levels. Cross-temporally reconciled machine learning base forecasts also demonstrate high accuracy at coarser temporal granularities, which may encourage adoption for short-term wind forecasts. We also show that linear regression can outperform machine learning models across most levels of cross-sectional wind time series.
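To make the reconciliation step concrete, the following minimal sketch (ours, not the study's code; the two-turbine hierarchy and forecast values are invented) projects incoherent base forecasts onto the coherent subspace via OLS reconciliation:

```python
# Minimal sketch of cross-sectional OLS reconciliation (illustrative only).
# A wind farm with two turbines: bottom series y1, y2 and their total.
import numpy as np

# Summing matrix S maps bottom-level series to all levels: [total, y1, y2].
S = np.array([[1, 1],
              [1, 0],
              [0, 1]], dtype=float)

# Incoherent base forecasts for [total, turbine 1, turbine 2] at one horizon.
y_hat = np.array([10.5, 4.0, 5.9])

# OLS reconciliation: project base forecasts onto the coherent subspace span(S).
P = np.linalg.inv(S.T @ S) @ S.T
y_tilde = S @ (P @ y_hat)

print(y_tilde)                              # reconciled forecasts
print(y_tilde[0], y_tilde[1] + y_tilde[2])  # total equals sum by construction
```

Trace-minimisation (MinT) variants replace the implicit identity weighting in this projection with an estimate of the base-forecast error covariance.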
Weight-sharing quantization has emerged as a technique to reduce energy expenditure during inference in large neural networks by constraining their weights to a limited set of values. However, existing methods often treat weights based on value alone, neglecting the unique role of weight position. This paper proposes a probabilistic framework based on Bayesian neural networks (BNNs) and a variational relaxation to identify which weights can be moved to which cluster centre, and to what degree, based on their individual position-specific learned uncertainty distributions. We introduce a new initialisation setting and a regularisation term which together enable the training of BNNs under complex dataset-model combinations. By leveraging the flexibility of weight values captured through a probability distribution, we enhance noise resilience and downstream compressibility. Our iterative clustering procedure demonstrates superior compressibility and higher accuracy than state-of-the-art methods on both ResNet models and more complex transformer-based architectures. In particular, our method improves on the top-1 accuracy of the state-of-the-art quantization method by 1.6% on ImageNet using DeiT-Tiny, whose more than 5 million weights are then represented by only 296 unique values.
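As a rough illustration of position-specific assignment (a hypothetical thresholding rule of ours, not the paper's variational procedure), weights whose learned uncertainty comfortably covers a cluster centre can be snapped to it:

```python
# Hypothetical sketch: each weight has a learned posterior mean and std; a
# weight is moved to a cluster centre only if the centre lies within k
# standard deviations of its mean.
import numpy as np

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 0.5, size=1000)        # learned posterior means
sigma = rng.uniform(0.01, 0.2, size=1000)   # learned posterior stds
centres = np.array([-0.5, 0.0, 0.5])        # current cluster centres

# Distance of each weight to each centre, scaled by per-weight uncertainty.
z = np.abs(mu[:, None] - centres[None, :]) / sigma[:, None]

nearest = z.argmin(axis=1)
assignable = z.min(axis=1) < 2.0            # within 2 sigma: safe to snap

quantised = np.where(assignable, centres[nearest], mu)
print(f"{assignable.mean():.0%} of weights snapped to a shared value")
```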
Model averaging (MA), a technique for combining estimators from a set of candidate models, has attracted increasing attention in machine learning and statistics. In the existing literature, there is an implicit understanding that MA can be viewed as a form of shrinkage estimation that draws the response vector towards the subspaces spanned by the candidate models. This paper explores this perspective by establishing connections between MA and shrinkage in a linear regression setting with multiple nested models. We first demonstrate that the optimal MA estimator is the best linear estimator with monotone non-increasing weights in a Gaussian sequence model. The Mallows MA, which estimates weights by minimizing Mallows' $C_p$ criterion, is a variation of the positive-part Stein estimator. Motivated by these connections, we develop a novel MA procedure based on blockwise Stein estimation. The resulting Stein-type MA estimator is asymptotically optimal across a broad parameter space when the variance is known. Numerical results support our theoretical findings. The connections established in this paper may open up new avenues for investigating MA from different perspectives. A discussion of some topics for future research concludes the paper.
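A minimal sketch of Mallows weight estimation over nested least-squares candidates (simulated data and models of ours; the blockwise Stein-type procedure itself is not reproduced here):

```python
# Sketch of Mallows model averaging over nested OLS models (illustrative only).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.5, 0.25, 0.0, 0.0])
sigma2 = 1.0                                 # known variance, as in the paper
y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)

# Candidate nested models use the first k columns, k = 1..p.
fits, sizes = [], []
for k in range(1, p + 1):
    Xk = X[:, :k]
    fits.append(Xk @ np.linalg.lstsq(Xk, y, rcond=None)[0])
    sizes.append(k)
fits, sizes = np.array(fits), np.array(sizes)

def mallows_cp(w):
    # Residual sum of squares of the averaged fit plus a complexity penalty.
    resid = y - w @ fits
    return resid @ resid + 2.0 * sigma2 * (w @ sizes)

# Minimise the criterion over the probability simplex.
cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
w0 = np.full(p, 1.0 / p)
res = minimize(mallows_cp, w0, bounds=[(0, 1)] * p, constraints=cons)
print("MMA weights:", np.round(res.x, 3))
```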
We consider the problem of estimating the marginal independence structure of a Bayesian network from observational data in the form of an undirected graph called the unconditional dependence graph. We show that unconditional dependence graphs of Bayesian networks correspond to the graphs having equal independence and intersection numbers. Using this observation, a Gr\"obner basis for a toric ideal associated to unconditional dependence graphs of Bayesian networks is given and then extended by additional binomial relations to connect the space of all such graphs. An MCMC method, called GrUES (Gr\"obner-based Unconditional Equivalence Search), is implemented based on the resulting moves and applied to synthetic Gaussian data. GrUES recovers the true marginal independence structure via a penalized maximum likelihood or MAP estimate at a higher rate than simple independence tests while also yielding an estimate of the posterior, for which the $20\%$ HPD credible sets include the true structure at a high rate for data-generating graphs with density at least $0.5$.
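The characterisation can be verified by brute force on tiny graphs; in the following sketch (ours, exponential-time and illustrative only), the path on three vertices has independence number and intersection number both equal to 2:

```python
# Brute-force check (tiny graphs only) of the equality characterising
# unconditional dependence graphs: independence number == intersection number.
from itertools import combinations

def is_clique(adj, S):
    return all(v in adj[u] for u, v in combinations(S, 2))

def independence_number(nodes, adj):
    # Size of the largest vertex set with no edges inside it.
    for k in range(len(nodes), 0, -1):
        for S in combinations(nodes, k):
            if all(v not in adj[u] for u, v in combinations(S, 2)):
                return k
    return 0

def intersection_number(nodes, adj, edges):
    # Minimum number of cliques whose union covers every edge.
    cliques = [set(S) for k in range(2, len(nodes) + 1)
               for S in combinations(nodes, k) if is_clique(adj, S)]
    for k in range(1, len(edges) + 1):
        for cover in combinations(cliques, k):
            covered = {frozenset(e) for C in cover
                       for e in combinations(sorted(C), 2)}
            if all(frozenset(e) in covered for e in edges):
                return k
    return 0

# Path a-b-c: both numbers equal 2, so it passes the test;
# the 4-cycle (independence number 2, intersection number 4) fails it.
nodes = ['a', 'b', 'c']
edges = [('a', 'b'), ('b', 'c')]
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
print(independence_number(nodes, adj), intersection_number(nodes, adj, edges))
```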
We construct a family of Markov decision processes for which the policy iteration algorithm needs an exponential number of improving switches with Dantzig's rule, with Bland's rule, and with the Largest Increase pivot rule. This immediately translates to a family of linear programs for which the simplex algorithm needs an exponential number of pivot steps with the same three pivot rules. Our results yield a unified construction that simultaneously reproduces well-known lower bounds for these classical pivot rules, and we are able to infer that any (deterministic or randomized) combination of them cannot avoid exponential worst-case behavior. Regarding the policy iteration algorithm, pivot rules typically switch multiple edges simultaneously, so our lower bounds for Dantzig's rule and the Largest Increase rule, which perform only single switches, appear to be novel. Regarding the simplex algorithm, the individual lower bounds were previously obtained separately via deformed hypercube constructions. In contrast to previous lower bounds for the simplex algorithm via Markov decision processes, our rigorous analysis is reasonably concise.
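For concreteness, here is a toy policy iteration loop with a Dantzig-style single-switch rule, applied to an illustrative two-state MDP rather than the exponential family constructed in the paper:

```python
# Toy policy iteration with single improving switches chosen by a
# Dantzig-style rule (largest immediate gain); illustrative only.
import numpy as np

# MDP: 2 states, 2 actions; P[s][a] = next-state distribution, R[s][a] = reward.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])
gamma = 0.9

def evaluate(pi):
    # Solve (I - gamma * P_pi) v = r_pi for the value of policy pi.
    P_pi = np.array([P[s, pi[s]] for s in range(2)])
    r_pi = np.array([R[s, pi[s]] for s in range(2)])
    return np.linalg.solve(np.eye(2) - gamma * P_pi, r_pi)

pi = np.zeros(2, dtype=int)
while True:
    v = evaluate(pi)
    q = R + gamma * P @ v                     # action values, shape (2, 2)
    gains = q.max(axis=1) - q[np.arange(2), pi]
    if gains.max() <= 1e-12:
        break                                 # no improving switch left
    s = gains.argmax()                        # Dantzig's rule: largest gain
    pi[s] = q[s].argmax()                     # single switch per iteration
print("optimal policy:", pi, "values:", evaluate(pi))
```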
This paper investigates the problem of estimating the larger location parameter of two general location families from a decision-theoretic perspective. In this estimation problem, we use the criteria of minimizing the risk function and the Pitman closeness under a general bowl-shaped loss function. Inadmissibility results for general location-equivariant estimators are provided. We prove that a natural estimator (the analogue of the BLEE of unordered location parameters) is inadmissible under certain conditions on the underlying densities, and propose a dominating estimator. We also derive a class of improved estimators using Kubokawa's IERD approach and observe that the boundary estimator of this class is a Brewster-Zidek-type estimator. Additionally, under the generalized Pitman criterion, we show that the natural estimator is inadmissible and obtain improved estimators. The results are implemented for different loss functions, and explicit expressions for the dominating estimators are provided. We explore applications of these results to exponential and normal distributions under specified loss functions. A simulation study is also conducted to compare the risk performance of the proposed estimators. Finally, we present a real-life data analysis to illustrate the practical applications of the paper's findings.
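The flavour of such a risk comparison can be sketched with a small Monte Carlo study (normal location families under squared error; the competitor below is an invented shrinkage rule of ours, not one of the paper's dominating estimators):

```python
# Monte Carlo sketch comparing squared-error risks for estimating
# max(theta1, theta2) from two N(theta_i, 1) observations.
import numpy as np

rng = np.random.default_rng(2)
n_rep = 200_000

def risk(estimator, theta1, theta2):
    x1 = rng.normal(theta1, 1.0, n_rep)
    x2 = rng.normal(theta2, 1.0, n_rep)
    return np.mean((estimator(x1, x2) - max(theta1, theta2)) ** 2)

# Natural estimator: the larger observation (analogue of the BLEE).
natural = lambda x1, x2: np.maximum(x1, x2)
# Invented competitor: shrink the estimated gap before taking the maximum.
shrunk = lambda x1, x2: 0.5 * (x1 + x2) + 0.8 * np.abs(0.5 * (x1 - x2))

for gap in [0.0, 0.5, 2.0]:
    print(gap, risk(natural, 0.0, gap), risk(shrunk, 0.0, gap))
```

The natural estimator's upward bias is largest when the two parameters coincide, which is where shrinkage-type corrections help; no dominance claim is implied by this toy comparison.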
We evaluate using Julia as a single language and ecosystem paradigm, powered by LLVM, to develop workflow components for high-performance computing. We run a Gray-Scott two-variable diffusion-reaction application using a memory-bound, 7-point stencil kernel on Frontier, the US Department of Energy's first exascale supercomputer. We evaluate the feasibility, performance, scaling, and trade-offs of (i) the computational kernel on AMD's MI250x GPUs, (ii) weak scaling up to 4,096 MPI processes/GPUs or 512 nodes, (iii) parallel I/O writes using the ADIOS2 library bindings, and (iv) Jupyter Notebooks for interactive data analysis. Our results suggest that although Julia generates a reasonable LLVM-IR kernel, a performance gap of nearly 50% remains relative to native AMD HIP stencil codes when running on the GPUs. As expected, we observed near-zero overhead when using MPI and parallel I/O bindings for system-wide installed implementations. Consequently, Julia emerges as a compelling high-performance and high-productivity workflow composition strategy, as measured on the fastest supercomputer in the world.
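For reference, the Gray-Scott update with a 7-point stencil reads as follows in a minimal NumPy sketch (illustrative parameters and problem size; the study's kernel is written in Julia and targets MI250x GPUs):

```python
# Minimal 3D Gray-Scott reaction-diffusion step with a 7-point stencil.
import numpy as np

# Illustrative parameters; not the configuration used on Frontier.
Du, Dv, F, k, dt = 0.16, 0.08, 0.04, 0.06, 0.5
n = 32
u = np.ones((n, n, n))
v = np.zeros((n, n, n))
c = n // 2
v[c-2:c+2, c-2:c+2, c-2:c+2] = 0.5       # local perturbation to seed patterns

def laplacian7(a):
    # 7-point stencil: six face neighbours minus six times the centre,
    # with periodic boundaries via np.roll.
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) +
            np.roll(a, 1, 2) + np.roll(a, -1, 2) - 6.0 * a)

for _ in range(200):
    uvv = u * v * v
    u += dt * (Du * laplacian7(u) - uvv + F * (1.0 - u))
    v += dt * (Dv * laplacian7(v) + uvv - (F + k) * v)

print("u in [%.3f, %.3f], v in [%.3f, %.3f]"
      % (u.min(), u.max(), v.min(), v.max()))
```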
Quasiperiodic systems, related to irrational numbers, are space-filling structures without decay or translation invariance. Accurately recovering such systems, especially in non-smooth cases, poses a significant challenge in numerical computation. In this paper, we propose a new algorithm, the finite points recovery (FPR) method, applicable to both smooth and non-smooth cases, to address this challenge. The FPR method first establishes a homomorphism between the lower-dimensional definition domain of the quasiperiodic function and the higher-dimensional torus, then recovers the global quasiperiodic system by employing an interpolation technique with finite points in the definition domain, without dimensional lifting. Furthermore, we develop accurate and efficient strategies for selecting finite points according to the arithmetic properties of irrational numbers. The corresponding mathematical theory, convergence analysis, and computational complexity analysis of the point-selection strategies are presented. Numerical experiments demonstrate the effectiveness and superiority of the FPR approach in recovering both smooth quasiperiodic functions and piecewise-constant Fibonacci quasicrystals, whereas existing spectral methods encounter difficulties in accurately recovering non-smooth quasiperiodic functions.
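The torus-parametrisation idea can be sketched for a one-dimensional example $f(x)=\cos(x)+\cos(\sqrt{2}\,x)$ (our illustration with nearest-neighbour interpolation; the FPR method's point-selection strategies are considerably more refined):

```python
# A quasiperiodic f(x) = F(x, sqrt(2) x) is recovered at new points by
# interpolating F in torus coordinates from finitely many samples.
import numpy as np
from scipy.interpolate import NearestNDInterpolator

alpha = np.sqrt(2.0)
f = lambda x: np.cos(x) + np.cos(alpha * x)   # quasiperiodic test function

# Finite samples; their images (x mod 2pi, alpha*x mod 2pi) fill the torus.
xs = np.linspace(0.0, 2000.0, 200_000)
torus = np.column_stack([xs % (2 * np.pi), (alpha * xs) % (2 * np.pi)])
interp = NearestNDInterpolator(torus, f(xs))

# Recover f far outside the sampled region via its torus coordinates.
x_new = np.array([1.0e5, 123456.789])
print(f(x_new))                                            # exact values
print(interp(x_new % (2 * np.pi), (alpha * x_new) % (2 * np.pi)))
# Agreement improves with more sample points or higher-order interpolation.
```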
In this work, a recently proposed high-cycle fatigue cohesive zone model, which covers crack initiation and propagation with limited input parameters, is embedded in a robust and efficient numerical framework for simulating progressive failure in composite laminates under fatigue loading. The fatigue cohesive zone model is enhanced with an implicit time integration scheme for the fatigue damage variable, which allows for larger cycle increments and more efficient analyses. The method is combined with an adaptive strategy for determining the cycle increment based on global convergence rates. Moreover, a consistent material tangent stiffness matrix has been derived by fully linearizing the underlying mixed-mode quasi-static model and the fatigue damage update. The enhanced fatigue cohesive zone model is used to describe matrix cracking and delamination in laminates. In order to allow matrix cracks to initiate at arbitrary locations and to avoid complex and costly mesh generation, the phantom-node version of the eXtended finite element method (XFEM) is employed. For the insertion of new crack segments, an XFEM fatigue crack insertion criterion is presented which is consistent with the fatigue cohesive zone formulation. It is shown with numerical examples that the improved fatigue damage update significantly enhances the accuracy, efficiency, and robustness of the numerical simulations. The numerical framework is applied to the simulation of progressive fatigue failure in an open-hole [$\pm$45]-laminate, and it is demonstrated that the model is capable of accurately and efficiently simulating the complete failure process from distributed damage to localized failure.
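The implicit damage update and adaptive cycle increment can be illustrated schematically; in the sketch below, the damage evolution law $g(D)=c(1-D)^m$ is a hypothetical stand-in for the paper's mixed-mode fatigue model:

```python
# Backward Euler on dD/dN = g(D), solved with Newton's method, with the
# cycle increment adapted from the iteration count (illustrative only).
c, m = 1e-4, 2.0
g  = lambda D: c * (1.0 - D) ** m
dg = lambda D: -c * m * (1.0 - D) ** (m - 1)

def implicit_step(D, dN, tol=1e-12, max_iter=20):
    # Solve r(D_new) = D_new - D - dN * g(D_new) = 0 for D_new.
    D_new = D
    for it in range(1, max_iter + 1):
        r = D_new - D - dN * g(D_new)
        if abs(r) < tol:
            return D_new, it
        D_new -= r / (1.0 - dN * dg(D_new))   # Newton update
    return D_new, max_iter

D, N, dN = 0.0, 0.0, 1000.0
while D < 0.99:
    D, iters = implicit_step(D, dN)
    N += dN
    dN *= 1.5 if iters <= 3 else 0.5   # grow/shrink increment with convergence
print(f"failure (D ~ 0.99) reached after about {N:.0f} cycles")
```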
Mitigating climate change requires a transition away from fossil fuels towards renewable energy. As a result, power generation becomes more volatile, and options for microgrids and islanded power-grid operation are being broadly discussed. Studying the power grids of physical islands, as a model for islanded microgrids, is therefore of particular interest for enhancing our understanding of power-grid stability. In the present paper, we investigate the statistical properties of the power-grid frequency of three island systems: Iceland, Ireland, and the Balearic Islands. We utilise a Fokker-Planck approach to construct stochastic differential equations that describe market activities, control, and noise acting on power-grid dynamics. Using the obtained parameters, we create synthetic time series of the frequency dynamics. Our main contribution is to propose two extensions of stochastic power-grid frequency models and to showcase the applicability of these new models to the non-Gaussian statistics encountered on islands.
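As a minimal example of the modelling pipeline, an Euler-Maruyama integration of an Ornstein-Uhlenbeck baseline (illustrative parameters of ours; the paper's extensions add further drift and noise structure):

```python
# Generate a synthetic frequency trajectory from an estimated drift and
# diffusion, here a simple Ornstein-Uhlenbeck model.
import numpy as np

rng = np.random.default_rng(3)
c1, epsilon = 0.05, 0.01      # effective damping/control and noise amplitude
dt, n_steps = 0.1, 100_000    # step size [s] and trajectory length

omega = np.zeros(n_steps)     # bulk angular frequency deviation
for t in range(n_steps - 1):
    # Euler-Maruyama step for d(omega) = -c1 * omega * dt + epsilon * dW
    omega[t + 1] = omega[t] - c1 * omega[t] * dt \
                   + epsilon * np.sqrt(dt) * rng.normal()

freq = 50.0 + omega / (2.0 * np.pi)
print("std of synthetic frequency [Hz]: %.4f" % freq.std())
```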
We propose an approach to compute inner- and outer-approximations of the sets of values satisfying constraints expressed as arbitrarily quantified formulas. Such formulas arise, for instance, when specifying important problems in control such as robustness, motion planning, or controller comparison. We propose an interval-based method which allows for tractable yet tight approximations. We demonstrate its applicability through a series of examples and benchmarks using a prototype implementation.
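The interval flavour of the method can be sketched on a single universally quantified constraint (a toy example of ours, unrelated to the prototype implementation):

```python
# Toy branch-and-prune sketch: approximate {x : forall w in [-1,1],
# x**2 + x*w - 1 <= 0} with interval tests. Since f is linear in w,
# max over w of f(x, w) equals g(x) = x**2 + |x| - 1, and an interval
# bound on g over a box decides membership; general quantified formulas
# require interval extensions in all quantified variables.

def g_range(box):
    lo, hi = box
    hi_a = max(abs(lo), abs(hi))
    lo_a = 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))
    return (lo_a**2 + lo_a - 1.0, hi_a**2 + hi_a - 1.0)

inner, undecided = [], [(-2.0, 2.0)]
for _ in range(14):                      # bounded bisection depth
    nxt = []
    for box in undecided:
        g_lo, g_hi = g_range(box)
        if g_hi <= 0.0:
            inner.append(box)            # proved inside the solution set
        elif g_lo > 0.0:
            continue                     # proved outside: pruned
        else:
            m = 0.5 * (box[0] + box[1])  # inconclusive: bisect
            nxt += [(box[0], m), (m, box[1])]
    undecided = nxt

# Inner approximation: union of proved boxes; adding `undecided` boxes
# gives the outer approximation. True set: |x| <= (sqrt(5) - 1)/2 ~ 0.618.
print(min(b[0] for b in inner), max(b[1] for b in inner))
```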