This article provides a reduced-order modelling framework for turbulent compressible flows discretized with finite volume methods. The goal is the construction of a reduced-order model that closely reproduces the high-fidelity flow fields. Full-order solutions are often obtained with segregated solvers, which employ slightly modified conservation laws so that the solution variables can be decoupled and solved one at a time. Classical reduction architectures, by contrast, rely on a Galerkin projection of the complete Navier-Stokes system all at once, which causes a mild discrepancy with respect to the full-order solutions. This article therefore relies on segregated reduced-order algorithms for the resolution of turbulent compressible flows in the presence of physical and geometrical parameters. At the full-order level, turbulence is modeled using an eddy viscosity approach. Since a variety of turbulence models exist for approximating this supplementary viscosity, one of the aims of this work is to provide a reduced-order model that is independent of this choice. This goal is reached through hybrid methods in which the Navier-Stokes equations are projected in a standard way, while the viscosity field is approximated either by data-driven interpolation methods or by evaluating a suitably trained neural network. These techniques make it possible to predict solutions that are accurate with respect to the full-order ones for problems characterized by high Reynolds and Mach numbers.
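To make the hybrid closure idea concrete, the following is a minimal sketch of one possible data-driven viscosity approximation: project eddy viscosity snapshots onto POD modes and interpolate the modal coefficients over the parameter space with radial basis functions. All names, array shapes, and the random placeholder data are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Illustrative setup: n_snap snapshots of the eddy viscosity field
# (n_cells values each), collected at n_snap parameter samples.
rng = np.random.default_rng(0)
n_snap, n_cells, n_modes = 40, 5000, 8
params = rng.uniform(0.0, 1.0, size=(n_snap, 2))    # e.g. (Mach, Reynolds) scaled to [0, 1]
nut_snapshots = rng.normal(size=(n_snap, n_cells))  # placeholder for full-order viscosity snapshots

# Offline stage: POD modes of the viscosity via thin SVD of the snapshot matrix.
U, S, Vt = np.linalg.svd(nut_snapshots.T, full_matrices=False)
modes = U[:, :n_modes]                              # (n_cells, n_modes)
coeffs = nut_snapshots @ modes                      # modal coefficients per snapshot

# One RBF interpolant mapping parameter values to modal coefficients.
rbf = RBFInterpolator(params, coeffs)

# Online stage: reconstruct the eddy viscosity at a new parameter value,
# bypassing the turbulence model entirely.
new_param = np.array([[0.3, 0.7]])
nut_reduced = rbf(new_param) @ modes.T              # (1, n_cells) approximation
```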
The broad class of multivariate unified skew-normal (SUN) distributions has recently been shown to possess important conjugacy properties. When used as priors for the vector of parameters in general probit, tobit, and multinomial probit models, these distributions yield posteriors that still belong to the SUN family. Although this core result has led to important advancements in Bayesian inference and computation, its applicability beyond likelihoods associated with fully-observed, discretized, or censored realizations from multivariate Gaussian models remains unexplored. This article fills this gap by proving that the wider family of multivariate unified skew-elliptical (SUE) distributions, which extends SUNs to more general perturbations of elliptical densities, guarantees conjugacy for broader classes of models beyond those relying on fully-observed, discretized, or censored Gaussians. The proof leverages the closure of the SUE family under linear combinations, conditioning, and marginalization to show that this family is conjugate to the likelihood induced by general multivariate regression models for fully-observed, censored, or dichotomized realizations from skew-elliptical distributions. This advancement enlarges the set of models that enable conjugate Bayesian inference to general formulations arising from elliptical and skew-elliptical families, including the multivariate Student's t and skew-t, among others.
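As one concrete instance of the Gaussian case that this article generalizes, the known probit result can be stated schematically as follows (a reminder of the prior literature, not the new SUE result; the exact SUN posterior parameters are omitted):

```latex
% Schematic statement of SUN conjugacy in the Gaussian-probit case.
\[
  y_i \mid \beta \sim \mathrm{Bern}\bigl(\Phi(x_i^{\top}\beta)\bigr),
  \; i = 1,\ldots,n,
  \qquad
  \beta \sim \mathrm{N}_p(\xi, \Omega)
  \;\Longrightarrow\;
  (\beta \mid y) \sim \mathrm{SUN}_{p,n}(\cdot).
\]
```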
This paper presents a fitted space-time finite element method for solving a parabolic advection-diffusion problem with a nonstationary interface. The jump in the diffusion coefficient gives rise to a discontinuity in the spatial gradient of the solution across the interface. We use the Banach-Nečas-Babuška theorem to show the well-posedness of the continuous variational problem. A fully discrete finite-element scheme is analyzed using the Galerkin method on unstructured fitted meshes. An optimal error estimate is established in a discrete energy norm under regularity conditions that are globally low but locally high. Numerical results corroborate the theoretical findings.
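A typical model problem of this type, written in illustrative notation that need not match the paper's, reads:

```latex
% A moving interface \Gamma(t) splits the domain \Omega into
% \Omega_1(t) and \Omega_2(t); the diffusion coefficient \beta
% takes a different constant value on each side.
\[
  \partial_t u - \nabla\cdot(\beta\,\nabla u) + \mathbf{b}\cdot\nabla u = f
  \quad \text{in } \Omega_1(t)\cup\Omega_2(t),
\]
\[
  [u] = 0, \qquad [\beta\,\nabla u\cdot\mathbf{n}] = 0
  \quad \text{on } \Gamma(t).
\]
```

Here $[\cdot]$ denotes the jump across the interface; the flux continuity condition is what forces the kink in $u$, and hence the gradient discontinuity, whenever $\beta$ jumps.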
This article advocates the use of conformal prediction (CP) methods for Gaussian process (GP) interpolation to enhance the calibration of prediction intervals. We begin by illustrating that using a GP model with parameters selected by maximum likelihood often results in predictions that are not optimally calibrated. CP methods can adjust the prediction intervals, leading to better uncertainty quantification while maintaining the accuracy of the underlying GP model. We compare different CP variants and introduce a novel variant based on an asymmetric score. Our numerical experiments demonstrate the effectiveness of CP methods in improving calibration without compromising accuracy. This work aims to facilitate the adoption of CP methods in the GP community.
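As a flavor of how CP sits on top of a GP, here is a minimal sketch of split conformal prediction with the normalized score |y - mu|/sigma; this is a generic recipe, not necessarily one of the specific variants compared in the article, and all data and kernel choices are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(120, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)

# Split the data: fit the GP on one part, calibrate intervals on the other.
X_fit, y_fit = X[:80], y[:80]
X_cal, y_cal = X[80:], y[80:]

gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2).fit(X_fit, y_fit)

# Normalized nonconformity scores on the calibration set.
mu_cal, sd_cal = gp.predict(X_cal, return_std=True)
scores = np.abs(y_cal - mu_cal) / sd_cal

# Conformal quantile for miscoverage level alpha.
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Calibrated prediction intervals at new points: the GP mean is kept,
# only the interval width is rescaled.
X_new = np.linspace(0, 10, 5).reshape(-1, 1)
mu, sd = gp.predict(X_new, return_std=True)
lower, upper = mu - q * sd, mu + q * sd
```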
This paper investigates the application of mini-batch gradient descent to semiflows. Given a loss function, we introduce a continuous version of mini-batch gradient descent by randomly selecting sub-loss functions over time, defining a piecewise flow. We prove that, under suitable assumptions on the gradient flow, the mini-batch descent flow trajectory closely approximates the original gradient flow trajectory on average. Additionally, we propose a randomized minimizing movement scheme that also approximates the gradient flow of the loss function. We illustrate the versatility of this approach across various problems, including constrained optimization, sparse inversion, and domain decomposition. Finally, we validate our results with several numerical examples.
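A minimal sketch of the continuous-time mini-batch idea follows: on each short time window, follow the gradient flow of a randomly selected sub-loss (integrated here with forward Euler), and compare with the flow of the full loss. The quadratic sub-losses and all step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadratic sub-losses f_k(x) = 0.5 * ||A_k x - b_k||^2 whose average is the full loss.
K, d = 5, 3
A = rng.normal(size=(K, d, d))
b = rng.normal(size=(K, d))

def grad_sub(k, x):
    return A[k].T @ (A[k] @ x - b[k])

def grad_full(x):
    return np.mean([grad_sub(k, x) for k in range(K)], axis=0)

# Piecewise flow: on each window of length h_batch, follow the gradient
# flow of a uniformly sampled sub-loss.
def minibatch_flow(x0, T=5.0, h_batch=0.05, h_euler=1e-3):
    x = x0.copy()
    for _ in range(int(T / h_batch)):
        k = rng.integers(K)
        for _ in range(int(h_batch / h_euler)):
            x -= h_euler * grad_sub(k, x)
    return x

def full_flow(x0, T=5.0, h_euler=1e-3):
    x = x0.copy()
    for _ in range(int(T / h_euler)):
        x -= h_euler * grad_full(x)
    return x

x0 = rng.normal(size=d)
# The gap shrinks as the batching window h_batch goes to zero.
print(np.linalg.norm(minibatch_flow(x0) - full_flow(x0)))
```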
Analyzing longitudinal data in health studies is challenging due to sparse and error-prone measurements, strong within-individual correlation, missing data, and varied trajectory shapes. While mixed-effect models (MM) effectively address these challenges, they remain parametric and may incur substantial computational costs. In contrast, Functional Principal Component Analysis (FPCA) is a non-parametric approach developed for regular and dense functional data that flexibly describes temporal trajectories at a potentially lower computational cost. This paper presents an empirical simulation study evaluating the behaviour of FPCA with sparse and error-prone repeated measures and its robustness under different missing-data schemes, in comparison with MM. The results show that FPCA is well-suited in the presence of missing-at-random data caused by dropout, except in scenarios involving very frequent and systematic dropout. Like MM, FPCA fails under a missing-not-at-random mechanism. FPCA was then applied to describe the trajectories of four cognitive functions before clinical dementia and to contrast them with those of matched controls in a case-control study nested in a population-based aging cohort. The average cognitive declines of future dementia cases showed a sudden divergence from those of their matched controls, with a sharp acceleration 5 to 2.5 years prior to diagnosis.
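For intuition, the decomposition FPCA targets can be sketched on a dense, regular grid via an eigendecomposition of the sample covariance; the sparse, error-prone setting studied in the paper additionally requires covariance-smoothing machinery, which is omitted here, and all simulated quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated dense trajectories: n subjects observed on a common time grid,
# built from two orthonormal eigenfunctions plus measurement noise.
n, T = 200, 50
t = np.linspace(0, 1, T)
scores_true = rng.normal(size=(n, 2)) * np.array([1.0, 0.5])
phi = np.vstack([np.sqrt(2) * np.sin(np.pi * t), np.sqrt(2) * np.cos(np.pi * t)])
Y = scores_true @ phi + 0.1 * rng.normal(size=(n, T))

# FPCA on a dense grid: eigendecomposition of the centered sample covariance.
mu = Y.mean(axis=0)
C = np.cov(Y - mu, rowvar=False)
eigval, eigvec = np.linalg.eigh(C)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Individual trajectories are summarized by a few principal component scores.
pc_scores = (Y - mu) @ eigvec[:, :2]
explained = eigval[:2].sum() / eigval.sum()
print(f"variance explained by 2 components: {explained:.2f}")
```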
This article addresses the problem of automatically generating attack trees that soundly and clearly describe the ways in which a system can be attacked. Soundness means that the attacks displayed by the attack tree are indeed attacks in the system; clarity means that the tree is efficient in communicating the attack scenario. To pursue clarity, we introduce an attack-tree generation algorithm that minimises the tree size and the information length of its labels without sacrificing correctness. We achieve this by i) introducing a system model that allows one to reason about attacks and goals in an efficient manner, and ii) establishing a connection between the problem of factorising algebraic expressions and the problem of minimising the tree size. To the best of our knowledge, we introduce the first attack-tree generation framework that optimises the labelling and shape of the generated trees while guaranteeing their soundness with respect to a system specification.
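The factorisation connection can be illustrated with a toy example (not the paper's algorithm): an attack goal expressed as an OR of two ANDs that share a sub-goal factors into a strictly smaller tree. Polynomial factoring is only an analogy here, since attack-tree semantics are not a polynomial ring.

```python
import sympy as sp

# Treat atomic attack steps as symbols, AND as * and OR as +.
a, b, c = sp.symbols("a b c")

flat = a * b + a * c          # OR of two ANDs: 4 leaf occurrences
factored = sp.factor(flat)    # a*(b + c): 3 leaf occurrences, smaller tree
print(factored)
```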
Finding the optimal design of experiments in the Bayesian setting typically requires estimation and optimization of the expected information gain functional. This functional consists of one outer and one inner integral, separated by the logarithm function applied to the inner integral. When the mathematical model of the experiment contains both uncertainty about the parameters of interest and nuisance uncertainty (i.e., uncertainty about parameters that affect the model but are not themselves of interest to the experimenter), two inner integrals must be estimated, further increasing the already considerable computational effort required to determine good approximations of the expected information gain. The Laplace approximation has been applied successfully in the context of experimental design in various ways, and we propose two novel estimators featuring the Laplace approximation to alleviate the computational burden of both inner integrals considerably. The first estimator applies Laplace's method followed by a Laplace approximation, introducing a bias. The second estimator uses two Laplace approximations as importance sampling measures for Monte Carlo approximations of the inner integrals. Both estimators use Monte Carlo approximation for the remaining outer integral. We provide four numerical examples demonstrating the applicability and effectiveness of our proposed estimators.
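For orientation, here is a generic Laplace-based expected information gain estimator in the simplified nuisance-free setting, where only one inner integral (the evidence) appears; this is an illustration of the general idea, not the paper's exact constructions, and the forward model and prior are toy choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
sigma = 0.2                      # observation noise standard deviation

def g(theta):                    # toy nonlinear forward model
    return np.sin(3.0 * theta)

def log_joint(theta, y):         # log p(y | theta) + log p(theta), theta ~ N(0, 1)
    return (-0.5 * ((y - g(theta)) / sigma) ** 2
            - np.log(sigma * np.sqrt(2 * np.pi))
            - 0.5 * theta ** 2 - 0.5 * np.log(2 * np.pi))

def log_evidence_laplace(y, h=1e-4):
    # Laplace approximation of log p(y): expand the log-joint to second
    # order around its mode (found numerically), then integrate the Gaussian.
    res = minimize_scalar(lambda th: -log_joint(th, y), bounds=(-5, 5), method="bounded")
    th = res.x
    hess = -(log_joint(th + h, y) - 2 * log_joint(th, y) + log_joint(th - h, y)) / h**2
    return log_joint(th, y) + 0.5 * np.log(2 * np.pi / hess)

# Outer Monte Carlo over (theta, y) ~ p(theta) p(y | theta); the inner
# integral is replaced by the cheap (but biased) Laplace approximation.
N = 200
thetas = rng.normal(size=N)
ys = g(thetas) + sigma * rng.normal(size=N)
log_lik = -0.5 * ((ys - g(thetas)) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))
eig = np.mean([log_lik[i] - log_evidence_laplace(ys[i]) for i in range(N)])
print(f"Laplace-based EIG estimate: {eig:.3f}")
```

With nuisance parameters, a second inner marginalization over the nuisance variables appears both in the likelihood term and in the evidence, which is precisely where the proposed estimators save effort.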
This paper delves into the equivalence problem of Smith forms for multivariate polynomial matrices. In general, multivariate ($n \geq 2$) polynomial matrices need not be equivalent to their Smith forms. Under a specific condition, however, we derive a necessary and sufficient criterion for this equivalence. Let $F\in K[x_1,\ldots,x_n]^{l\times m}$ be of rank $r$ and let $d_r(F)\in K[x_1]$ be the greatest common divisor of all the $r\times r$ minors of $F$, where $K$ is a field, $x_1,\ldots,x_n$ are variables, and $1 \leq r \leq \min\{l,m\}$. Our main result is the following: $F$ is equivalent to its Smith form if and only if, for $i=1,\ldots,r$, the $i\times i$ reduced minors of $F$ generate $K[x_1,\ldots,x_n]$.
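In practice the criterion can be checked with a computer algebra system: compute the $i\times i$ minors, divide out their greatest common divisor to obtain the reduced minors, and test whether they generate the whole ring (Gröbner basis equal to $\{1\}$). The matrix below is an illustrative example, not taken from the paper.

```python
from itertools import combinations
import sympy as sp
from sympy import groebner, gcd_list

x1, x2 = sp.symbols("x1 x2")

# Illustrative 2x2 bivariate polynomial matrix; note d_2(F) = x1**2 lies in K[x1].
F = sp.Matrix([[x1, x2], [0, x1]])

def reduced_minors(F, i):
    # All i x i minors of F, divided by their gcd d_i(F).
    rows, cols = F.shape
    minors = [F[list(r), list(c)].det()
              for r in combinations(range(rows), i)
              for c in combinations(range(cols), i)]
    d_i = gcd_list(minors)
    return [sp.cancel(m / d_i) for m in minors]

# Criterion: F is equivalent to its Smith form iff for every i = 1, ..., r
# the i x i reduced minors generate K[x1, x2]. Here the check fails at
# i = 1 (the reduced minors generate the proper ideal (x1, x2)), so this
# F is not equivalent to its Smith form.
for i in (1, 2):
    rm = [m for m in reduced_minors(F, i) if m != 0]
    gb = groebner(rm, x1, x2)
    print(i, list(gb.exprs), "generates the ring:", gb.exprs == [sp.Integer(1)])
```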
Shape-restricted inference has exhibited empirical success in various applications with survival data. However, certain works fall short in providing a rigorous theoretical justification and an easy-to-use variance estimator with theoretical guarantees. Motivated by Deng et al. (2023), this paper delves into an additive and shape-restricted partially linear Cox model for right-censored data, where each additive component satisfies a specific shape restriction, encompassing monotonic increasing/decreasing and convexity/concavity. We systematically investigate the consistency and convergence rates of the shape-restricted maximum partial likelihood estimator (SMPLE) of all the underlying parameters. We further establish the asymptotic normality and semiparametric efficiency of the SMPLE for the linear covariate shift. To estimate the asymptotic variance, we propose an innovative data-splitting variance estimation method that boasts exceptional versatility and broad applicability. Our simulation results and an analysis of the Rotterdam Breast Cancer dataset demonstrate that the SMPLE has comparable performance with the maximum likelihood estimator under the Cox model when the Cox model is correct, and outperforms the latter and Huang (1999)'s method when the Cox model is violated or the hazard is nonsmooth. Meanwhile, the proposed variance estimation method usually leads to reliable interval estimates based on the SMPLE and its competitors.
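For illustration only, one generic flavor of data-splitting variance estimation re-estimates the parameter on disjoint folds and rescales the between-fold variability to the full sample size; this is not the specific construction proposed in the paper, and the estimator here is a stand-in.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimator(sample):
    # Stand-in for a shape-restricted fit returning the parameter of
    # interest; a sample mean keeps the sketch self-contained.
    return sample.mean()

def split_variance(data, K=10):
    # Re-estimate on K disjoint folds; the between-fold variance estimates
    # Var(theta_hat) at fold size m, which rescales by m/n to full size.
    n = len(data)
    folds = np.array_split(rng.permutation(data), K)
    ests = np.array([estimator(f) for f in folds])
    m = n / K
    return ests.var(ddof=1) * m / n

data = rng.normal(loc=1.0, scale=2.0, size=1000)
print(split_variance(data))   # close to 2.0**2 / 1000 = 0.004 for the mean
```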
Evaluating heterogeneity of treatment effects (HTE) across subgroups is common in both randomized trials and observational studies. Although several statistical challenges of HTE analyses, including low statistical power and multiple comparisons, are widely acknowledged, issues arising for clustered data, including cluster randomized trials (CRTs), have received less attention. Notably, the potential for model misspecification is increased given the complex clustering structure (e.g., due to correlation among individuals within a subgroup and cluster), which can impact inference and type 1 error. To illustrate this issue, we conducted a simulation study evaluating the performance of common analytic approaches for testing the presence of HTE for continuous, binary, and count outcomes: generalized linear mixed models (GLMM) and generalized estimating equations (GEE), each including interaction terms between treatment group and subgroup. We found that standard GLMM analyses that assume a common correlation of participants within clusters can lead to severely elevated type 1 error rates, up to 47.2% compared to the 5% nominal level, when the within-cluster correlation varies across subgroups. A flexible GLMM, which allows subgroup-specific within-cluster correlations, achieved the nominal type 1 error rate, as did GEE (though rates were slightly elevated even with as many as 50 clusters). Applying the methods to a real-world CRT with a count outcome (healthcare utilization), we found a large impact of the model specification on inference: the standard GLMM yielded a highly significant interaction with sex (P=0.01), whereas the interaction was not statistically significant under the flexible GLMM and GEE (P=0.64 and 0.93, respectively). We recommend that HTE analyses using GLMM account for within-subgroup correlation to avoid anti-conservative inference.
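A minimal sketch of the GEE arm of such an analysis, on simulated CRT data with illustrative variable names, could look as follows; the treat:subgroup coefficient carries the HTE test, and the robust sandwich errors are what protect inference when the working correlation is misspecified.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated CRT: 50 clusters, treatment assigned at the cluster level,
# a binary subgroup (e.g. sex), and a count outcome with a cluster effect.
n_clusters, m = 50, 20
cluster = np.repeat(np.arange(n_clusters), m)
treat = np.repeat(rng.integers(0, 2, n_clusters), m)
subgroup = rng.integers(0, 2, n_clusters * m)
u = np.repeat(rng.normal(0, 0.3, n_clusters), m)      # cluster random effect
lam = np.exp(0.5 + 0.3 * treat + 0.2 * subgroup + u)  # no true interaction
y = rng.poisson(lam)
df = pd.DataFrame(dict(y=y, treat=treat, subgroup=subgroup, cluster=cluster))

# Poisson GEE with an exchangeable working correlation and a
# treatment-by-subgroup interaction term.
model = smf.gee("y ~ treat * subgroup", groups="cluster", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(res.summary().tables[1])   # the treat:subgroup row tests for HTE
```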