Material Flow Analysis (MFA) is used to quantify and understand the life cycles of materials from production to end of use, enabling the assessment of environmental, social and economic impacts and the design of interventions. MFA is challenging as available data are often limited and uncertain, leading to an underdetermined system with an infinite number of possible stock and flow values. Bayesian statistics is an effective way to address these challenges, as it incorporates domain knowledge in a principled way, quantifies uncertainty in the data and provides probabilities associated with model solutions. This paper presents a novel MFA methodology under the Bayesian framework. By relaxing the mass balance constraints, we improve the computational scalability and reliability of the posterior samples compared to existing Bayesian MFA methods. We propose a mass-based, child-parent process framework to model systems with disaggregated processes and flows. We show that posterior predictive checks can be used to identify inconsistencies in the data and to aid noise and hyperparameter selection. The proposed approach is demonstrated on case studies, including a global aluminium cycle with significant disaggregation, under weakly informative priors and significant data gaps, to investigate the feasibility of Bayesian MFA. We illustrate that even a weakly informative prior can greatly improve the performance of Bayesian methods, in terms of both estimation accuracy and uncertainty quantification.
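As a rough illustration of the relaxed-mass-balance idea (not the paper's implementation), the sketch below defines a log-posterior in which the mass-balance residual enters as a soft Gaussian penalty rather than a hard constraint, with weakly informative log-normal priors on the flows and noisy partial observations; a simple random-walk Metropolis sampler draws from it. The flow structure, prior scales and noise levels are all assumptions made for the example.

```python
import numpy as np

# Toy 3-flow system: inflow f0 into a process splits into outflows f1 and f2.
# Mass balance at the process: f0 should (approximately) equal f1 + f2.
obs = {0: 10.0, 2: 3.0}        # observed flows (f1 is a data gap)
obs_sd = 0.5                   # assumed observation noise
balance_sd = 0.2               # relaxation of the mass-balance constraint
prior_mu, prior_sd = np.log(5.0), 1.0   # weakly informative log-normal prior

def log_post(logf):
    f = np.exp(logf)
    lp = -0.5 * np.sum(((logf - prior_mu) / prior_sd) ** 2)    # prior
    lp += -0.5 * ((f[0] - f[1] - f[2]) / balance_sd) ** 2      # soft mass balance
    for i, y in obs.items():                                    # likelihood
        lp += -0.5 * ((f[i] - y) / obs_sd) ** 2
    return lp

def metropolis(n_iter=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = np.full(3, prior_mu)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.standard_normal(3)
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        samples.append(np.exp(x))
    return np.array(samples)

flows = metropolis()
print(flows[5000:].mean(axis=0), flows[5000:].std(axis=0))  # the unobserved f1 is inferred
```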
We consider the problem of efficiently simulating stochastic models of chemical kinetics. The Gillespie Stochastic Simulation Algorithm (SSA) is often used to simulate these models; however, in many scenarios of interest the computational cost quickly becomes prohibitive. This is further exacerbated in the Bayesian inference context when estimating parameters of chemical models, as the intractability of the likelihood requires multiple simulations of the underlying system. To address these issues of computational complexity, in this paper we propose a novel hybrid $\tau$-leap algorithm for simulating well-mixed chemical systems. In particular, the algorithm uses $\tau$-leaping when appropriate (high population densities) and SSA when necessary (low population densities, when discrete effects become non-negligible). In the intermediate regime, a combination of the two methods, which leverages the properties of the underlying Poisson formulation, is employed. As illustrated through a number of numerical experiments, the hybrid $\tau$-leap algorithm offers significant computational savings compared with SSA without sacrificing overall accuracy. This feature is particularly welcome in the Bayesian inference context, as it allows for parameter estimation of stochastic chemical kinetics at reduced computational cost.
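To make the regime-switching idea concrete, here is a minimal, self-contained sketch (not the authors' algorithm, which also blends the two methods in an intermediate regime) for a birth-death process: exact SSA steps are taken when the population is below an assumed threshold, and Poisson $\tau$-leap steps otherwise. The threshold, rate constants and leap size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
k_birth, k_death = 10.0, 0.1      # birth-death rates (illustrative)
threshold, tau = 50, 0.05         # switch to tau-leap above this population

def propensities(x):
    return np.array([k_birth, k_death * x])   # birth, death

stoich = np.array([+1, -1])

def hybrid_step(x, t):
    a = propensities(x)
    a0 = a.sum()
    if x < threshold:
        # exact SSA step (low population: discrete effects matter)
        dt = rng.exponential(1.0 / a0)
        r = rng.choice(2, p=a / a0)
        return x + stoich[r], t + dt
    # tau-leap step (high population): Poisson numbers of firings over tau
    counts = rng.poisson(a * tau)
    return max(x + int(stoich @ counts), 0), t + tau

x, t = 5, 0.0
while t < 100.0:
    x, t = hybrid_step(x, t)
print("final population:", x)
```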
This paper addresses the equivalence problem of Smith forms for multivariate polynomial matrices. In general, multivariate ($n \geq 2$) polynomial matrices need not be equivalent to their Smith forms. Under a specific condition, however, we derive a necessary and sufficient condition for their equivalence. Let $F\in K[x_1,\ldots,x_n]^{l\times m}$ be of rank $r$ and suppose that $d_r(F)\in K[x_1]$, where $d_r(F)$ denotes the greatest common divisor of all the $r\times r$ minors of $F$, $K$ is a field, $x_1,\ldots,x_n$ are variables and $1 \leq r \leq \min\{l,m\}$. Our main result is the following: $F$ is equivalent to its Smith form if and only if all the $i\times i$ reduced minors of $F$ generate $K[x_1,\ldots,x_n]$ for $i=1,\ldots,r$.
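A simple example of our own (not taken from the paper) illustrates the criterion: take $F = (x_1 \;\; x_2) \in K[x_1,x_2]^{1\times 2}$, so that $r=1$, $d_1(F)=\gcd(x_1,x_2)=1\in K[x_1]$, and the Smith form of $F$ is $(1 \;\; 0)$. The $1\times 1$ reduced minors $x_1$ and $x_2$ generate only the proper ideal $\langle x_1,x_2\rangle \neq K[x_1,x_2]$, so by the criterion $F$ is not equivalent to its Smith form; indeed, $FV=(c \;\; 0)$ with $c\in K\setminus\{0\}$ and $V$ unimodular would require $x_1 v_{11} + x_2 v_{21} = c$, which is impossible.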
Tensor-based morphometry (TBM) aims at showing local differences in brain volumes with respect to a common template. TBM images are smooth, but they exhibit higher values (especially in diseased groups) in certain brain regions, such as the lateral ventricles. More specifically, our voxelwise analysis shows both a mean-variance relationship in these areas and evidence of spatially dependent skewness. We propose a model for 3-dimensional functional data where the mean, variance, and skewness functions vary smoothly across brain locations. We model the voxelwise distributions as skew-normal. The smooth effects of age and sex are estimated on a reference population of cognitively normal subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset and mapped across the whole brain. The three parameter functions make it possible to transform each TBM image (in the reference population as well as in a test set) into a Gaussian process. These subject-specific normative maps are used to derive indices of deviation from a healthy condition to assess the individual risk of pathological degeneration.
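A minimal sketch of the voxelwise idea, using simulated data and independent per-voxel fits with scipy's skew-normal (the paper instead estimates parameter functions that vary smoothly over space and adjust for age and sex): fit a skew-normal at each voxel of a reference group, then map a new image to normative z-scores through the fitted CDF followed by the standard normal quantile function.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_vox = 200, 5                      # toy reference population

# Simulated reference TBM values with voxel-dependent skewness (illustrative).
ref = np.array([stats.skewnorm.rvs(a=3 * v, loc=v, scale=1 + 0.2 * v,
                                   size=n_subj, random_state=rng)
                for v in range(n_vox)]).T   # shape (n_subj, n_vox)

# Independent voxelwise skew-normal fits (no spatial smoothing in this sketch).
params = [stats.skewnorm.fit(ref[:, v]) for v in range(n_vox)]

def to_gaussian(image):
    """Map a subject's voxel values to normative z-scores."""
    u = np.array([stats.skewnorm.cdf(image[v], *params[v]) for v in range(n_vox)])
    return stats.norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))

test_image = ref[0]
print(to_gaussian(test_image))              # near-N(0,1) values for a reference subject
```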
Although Bayesian skew-normal models are useful for flexibly modeling spatio-temporal processes, they still suffer from high computational cost and from limited interpretability of their mean and variance parameters, including regression coefficients. To address these problems, this study proposes a spatio-temporal model that incorporates skewness while preserving the mean and variance, by applying a flexible subclass of the closed skew-normal distribution. An efficient sampling method is introduced, leveraging the autoregressive representation of the model. Additionally, the model's symmetry with respect to spatial order is demonstrated, and Mardia's skewness and kurtosis are derived, showing their independence from the mean and variance. Simulation studies compare the estimation performance of the proposed model with that of the Gaussian model. The results confirm its superiority in scenarios with high skewness and low observation noise. The identification of Cobb-Douglas production functions across US states is examined as an application to real data, revealing that the proposed model excels in both goodness of fit and predictive performance.
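The following scalar toy sketch (not the paper's closed skew-normal subclass or its autoregressive sampler) illustrates how skewness can be injected while keeping the mean and variance fixed: a half-normal component is centred and the resulting mixture rescaled to unit variance, so coefficients acting on the mean keep their usual interpretation.

```python
import numpy as np

def skew_noise(delta, size, rng):
    """Zero-mean, unit-variance noise with skewness controlled by delta in (-1, 1)."""
    z0 = np.abs(rng.standard_normal(size))          # half-normal component
    z1 = rng.standard_normal(size)
    x = delta * (z0 - np.sqrt(2 / np.pi)) + np.sqrt(1 - delta**2) * z1
    return x / np.sqrt(1 - (2 / np.pi) * delta**2)  # rescale to unit variance

rng = np.random.default_rng(42)
eps = skew_noise(delta=0.95, size=200_000, rng=rng)
print(eps.mean(), eps.var())                         # ~0 and ~1
print(((eps - eps.mean()) ** 3).mean())              # clearly nonzero skewness
```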
Time variation and persistence are crucial properties of volatility that are often studied separately in energy volatility forecasting models. Here, we propose a novel approach that allows shocks with heterogeneous persistence to vary smoothly over time, thereby modelling the two jointly. We argue that this is important because such dynamics arise naturally from the dynamic nature of shocks in energy commodities. We identify these dynamics from the data using localised regressions and build a model that significantly improves volatility forecasts. Such forecasting models, based on a rich persistence structure that varies smoothly over time, outperform state-of-the-art benchmark models and are particularly useful for forecasting over longer horizons.
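As a rough sketch of what localised regressions can look like in this setting (our illustration, with an assumed HAR-style design of daily, weekly and monthly realised-volatility components rather than the authors' exact specification), the snippet below estimates time-varying coefficients by kernel-weighted least squares around each target date.

```python
import numpy as np

def har_design(rv):
    """Daily, weekly (5-day) and monthly (22-day) lagged RV averages."""
    d = rv[21:-1]
    w = np.array([rv[i - 4:i + 1].mean() for i in range(21, len(rv) - 1)])
    m = np.array([rv[i - 21:i + 1].mean() for i in range(21, len(rv) - 1)])
    X = np.column_stack([np.ones_like(d), d, w, m])
    y = rv[22:]
    return X, y

def local_har(rv, bandwidth=100.0):
    """Kernel-weighted least-squares HAR coefficients at each point in time."""
    X, y = har_design(rv)
    T = len(y)
    betas = np.empty((T, X.shape[1]))
    for t in range(T):
        k = np.exp(-0.5 * ((np.arange(T) - t) / bandwidth) ** 2)  # Gaussian kernel
        sk = np.sqrt(k)
        betas[t] = np.linalg.lstsq(sk[:, None] * X, sk * y, rcond=None)[0]
    return betas

rng = np.random.default_rng(3)
rv = np.abs(rng.standard_normal(600)).cumsum() * 0.01 + 1.0   # placeholder RV series
print(local_har(rv)[::100])                                   # coefficients drift over time
```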
Although passivization is productive in English, it is not completely general -- some exceptions exist (e.g. *One hour was lasted by the meeting). How do English speakers learn these exceptions to an otherwise general pattern? Using neural network language models as theories of acquisition, we explore the sources of indirect evidence that a learner can leverage to learn whether a verb can passivize. We first characterize English speakers' judgments of exceptions to the passive, confirming that speakers find some verbs more passivizable than others. We then show that a neural network language model can learn restrictions to the passive that are similar to those displayed by humans, suggesting that evidence for these exceptions is available in the linguistic input. We test the causal role of two hypotheses for how the language model learns these restrictions by training models on modified training corpora, which we create by altering the existing training corpora to remove features of the input implicated by each hypothesis. We find that while the frequency with which a verb appears in the passive significantly affects its passivizability, the semantics of the verb does not. This study highlights the utility of altering a language model's training data for answering questions where complete control over a learner's input is vital.
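A schematic sketch of the kind of corpus manipulation involved (not the authors' pipeline, which would rely on proper parsing rather than the crude regular expression below): count how often each verb appears in a simple "be + past participle (+ by)" frame, and optionally filter such sentences out before training, to probe the frequency hypothesis.

```python
import re
from collections import Counter

# Crude passive pattern: a form of "be" followed by a word ending in -ed/-en,
# optionally followed by a "by"-phrase. Real work would use a syntactic parser.
PASSIVE = re.compile(r"\b(?:is|are|was|were|been|being|be)\s+(\w+(?:ed|en))\b(\s+by\b)?",
                     re.IGNORECASE)

def passive_counts(sentences):
    """Count putative passive uses per (unlemmatised) verb form."""
    counts = Counter()
    for s in sentences:
        for verb, _by in PASSIVE.findall(s):
            counts[verb.lower()] += 1
    return counts

def ablate_passives(sentences, target_verbs):
    """Remove sentences containing passives of the target verbs (input ablation)."""
    kept = []
    for s in sentences:
        verbs = {v.lower() for v, _ in PASSIVE.findall(s)}
        if not verbs & target_verbs:
            kept.append(s)
    return kept

corpus = ["The meeting was attended by everyone.",
          "The meeting lasted one hour.",
          "The ball was kicked by the child."]
print(passive_counts(corpus))
print(ablate_passives(corpus, {"kicked"}))
```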
In this paper we consider the numerical solution of fractional terminal value problems (FDE-TVPs). The proposed procedure uses a Newton-type iteration, which proves particularly efficient when coupled with a recently-introduced step-by-step procedure for solving fractional initial value problems (FDE-IVPs) that is able to produce spectrally accurate solutions of FDE problems. Some numerical tests are reported to provide evidence of its effectiveness.
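To convey the overall structure (not the paper's spectrally accurate procedure), the sketch below solves a scalar Caputo terminal value problem by shooting: a first-order explicit product-rectangle rule integrates the fractional IVP for a trial initial value, and a secant iteration (a Newton-type update with a finite-difference derivative) adjusts that value until the terminal condition is met. The order, right-hand side and terminal data are illustrative.

```python
import numpy as np
from math import gamma

alpha, T, yT = 0.7, 1.0, 2.0        # order, terminal time, terminal value (illustrative)
f = lambda t, y: -y + t             # right-hand side of the Caputo FDE D^alpha y = f(t, y)

def fde_ivp(y0, N=400):
    """Explicit fractional (product-rectangle) Euler method on [0, T]; returns y(T)."""
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    y = np.empty(N + 1)
    y[0] = y0
    c = h**alpha / gamma(alpha + 1)
    for n in range(N):
        j = np.arange(n + 1)
        b = (n + 1 - j)**alpha - (n - j)**alpha      # convolution weights
        y[n + 1] = y0 + c * np.sum(b * f(t[j], y[j]))
    return y[-1]

def shoot(y0a=0.0, y0b=1.0, tol=1e-10, max_iter=50):
    """Secant (Newton-type) iteration on the residual y(T; y0) - yT."""
    ra, rb = fde_ivp(y0a) - yT, fde_ivp(y0b) - yT
    for _ in range(max_iter):
        y0c = y0b - rb * (y0b - y0a) / (rb - ra)
        rc = fde_ivp(y0c) - yT
        if abs(rc) < tol:
            return y0c
        y0a, ra, y0b, rb = y0b, rb, y0c, rc
    return y0b

y0 = shoot()
print("recovered initial value:", y0, " terminal residual:", fde_ivp(y0) - yT)
```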
Infrared imagery can help in low-visibility situations such as fog and low-light scenarios, but it is prone to thermal noise and requires further processing and correction. This work studies the effect of different infrared processing pipelines on the performance of pedestrian detection in an urban environment, similar to autonomous driving scenarios. Detection on infrared images is shown to outperform that on visible images, but the infrared correction pipeline is crucial since the models cannot extract information from raw infrared images. Two thermal correction pipelines are studied: the shutter and the shutterless pipelines. Experiments show that some correction algorithms, like spatial denoising, are detrimental to performance even if they increase visual quality for a human observer. Other algorithms, like destriping and, to a lesser extent, temporal denoising, increase computational time but have a role to play in increasing detection accuracy. As it stands, the optimal trade-off between speed and accuracy for autonomous driving applications in varied environments is simply to use the shutterless pipeline with a tonemapping algorithm only.
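For concreteness, here is one common form a tonemapping step can take (our assumption; the paper's pipelines also include shutter/shutterless correction, destriping and denoising stages): robust percentile clipping that maps a 16-bit thermal frame to an 8-bit image suitable for a detector.

```python
import numpy as np

def tonemap(frame_u16, p_low=1.0, p_high=99.0):
    """Clip a 16-bit thermal frame to robust percentiles and rescale to 8 bits."""
    lo, hi = np.percentile(frame_u16, [p_low, p_high])
    out = np.clip((frame_u16.astype(np.float32) - lo) / max(hi - lo, 1.0), 0.0, 1.0)
    return (out * 255.0).astype(np.uint8)

rng = np.random.default_rng(7)
raw = rng.integers(20000, 24000, size=(480, 640), dtype=np.uint16)  # fake raw frame
img8 = tonemap(raw)
print(img8.dtype, img8.min(), img8.max())
```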
In the field of autonomous driving research, the use of immersive virtual reality (VR) techniques is widespread, enabling a variety of studies under safe and controlled conditions. However, this methodology is only valid and consistent if the conduct of participants in the simulated setting mirrors their actions in an actual environment. In this paper, we present a first, innovative approach to evaluating what we term the behavioural gap, a concept that captures the disparity in a participant's conduct when engaging in a VR experiment compared to an equivalent real-world situation. To this end, we developed a digital twin of a pre-existing crosswalk and carried out a field experiment (N=18) to investigate pedestrian-autonomous vehicle interaction in both real and simulated driving conditions. In the experiment, the pedestrian attempts to cross the road in the presence of different driving styles and an external Human-Machine Interface (eHMI). By combining survey-based and behavioural analysis methodologies, we develop a quantitative approach to empirically assess the behavioural gap, as a mechanism to validate data obtained from real subjects interacting in a simulated VR-based environment. Results show that participants are more cautious and curious in VR, affecting their speed and decisions, and that VR interfaces significantly influence their actions.
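As an illustration of one way such a gap could be quantified (a generic effect-size sketch, not the paper's specific survey-plus-behaviour methodology), the snippet below compares a behavioural measure, e.g. crossing speed, between the real and VR conditions using Cohen's d and a Mann-Whitney test; the data are simulated.

```python
import numpy as np
from scipy import stats

def behavioural_gap(real, vr):
    """Standardised difference (Cohen's d) and a nonparametric test between conditions."""
    pooled_sd = np.sqrt((real.var(ddof=1) + vr.var(ddof=1)) / 2)
    d = (real.mean() - vr.mean()) / pooled_sd
    u, p = stats.mannwhitneyu(real, vr, alternative="two-sided")
    return d, p

rng = np.random.default_rng(18)
speed_real = rng.normal(1.40, 0.15, size=18)   # illustrative crossing speeds (m/s)
speed_vr = rng.normal(1.25, 0.18, size=18)     # slower, more cautious behaviour in VR
print(behavioural_gap(speed_real, speed_vr))
```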
The automated finite element analysis of complex CAD models using boundary-fitted meshes is rife with difficulties. Immersed finite element methods are intrinsically more robust but usually less accurate. In this work, we introduce an efficient, robust, high-order immersed finite element method for complex CAD models. Our approach relies on three adaptive structured grids: a geometry grid for representing the implicit geometry, a finite element grid for discretising physical fields and a quadrature grid for evaluating the finite element integrals. The geometry grid is a sparse VDB (Volumetric Dynamic B+ tree) grid that is highly refined close to physical domain boundaries. The finite element grid consists of a forest of octree grids distributed over several processors, and the quadrature grid in each finite element cell is an octree grid constructed in a bottom-up fashion. We discretise physical fields on the finite element grid using high-order Lagrange basis functions. The resolution of the quadrature grid ensures that finite element integrals are evaluated with sufficient accuracy and that any sub-grid geometric features, like small holes or corners, are resolved up to a desired resolution. The conceptual simplicity and modularity of our approach make it possible to reuse open-source libraries, i.e. openVDB and p4est for implementing the geometry and finite element grids, respectively, and BDDCML for iteratively solving the discrete systems of equations in parallel using domain decomposition. We demonstrate the efficiency and robustness of the proposed approach by solving the Poisson equation on domains given by complex CAD models and discretised with tens of millions of degrees of freedom.
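The following small 2D analogue (our sketch, using a quadtree instead of an octree and a midpoint rule instead of the accurate quadrature implied by the paper) shows the bottom-up idea of refining the quadrature grid only in cells cut by the implicit boundary; here the "CAD model" is simply a circle given by a signed distance function.

```python
import numpy as np

phi = lambda x, y: np.hypot(x - 0.5, y - 0.5) - 0.3   # signed distance to a circle

def cell_integral(x0, y0, h, depth, max_depth=8):
    """Integrate the indicator of {phi < 0} over the cell [x0, x0+h] x [y0, y0+h]."""
    d = phi(x0 + h / 2, y0 + h / 2)
    half_diag = h * np.sqrt(0.5)
    if d < -half_diag:                    # signed distance: cell certainly inside
        return h * h
    if d > half_diag:                     # cell certainly outside
        return 0.0
    if depth == max_depth:                # finest level: midpoint rule on the cut cell
        return h * h if d < 0 else 0.0
    h2 = h / 2                            # subdivide only the cut cells (quadtree refinement)
    return sum(cell_integral(x0 + i * h2, y0 + j * h2, h2, depth + 1, max_depth)
               for i in (0, 1) for j in (0, 1))

# Coarse 4x4 "finite element" grid; each cut cell builds its own quadrature tree.
area = sum(cell_integral(i * 0.25, j * 0.25, 0.25, depth=1)
           for i in range(4) for j in range(4))
print(area, np.pi * 0.3**2)               # approximated vs exact area of the domain
```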