Membrane locking in finite element approximations of thin beams and shells has remained an unresolved topic despite four decades of research. In this article, we use Fourier analysis of the complete spectrum of natural vibrations and propose a criterion to identify and evaluate the severity of membrane locking. To demonstrate our approach, we apply standard and mixed Galerkin formulations to a circular Euler-Bernoulli ring discretized with uniform, periodic B-splines. By analytically computing the discrete Fourier operators, we obtain an exact representation of the normalized error across the entire spectrum of eigenvalues. Our investigation addresses key questions related to membrane locking, including mode susceptibility, the influence of polynomial order, and the impact of shell/beam thickness and radius of curvature. Furthermore, we compare the effectiveness of mixed and standard Galerkin methods in mitigating locking. By providing insights into the parameters affecting locking and introducing a criterion to evaluate its severity, this research contributes to the development of improved numerical methods for thin beams and shells.
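As a sketch of the spectral error measure implied above (our notation, not necessarily the authors'): if $\lambda_k$ denotes the $k$-th exact eigenvalue of the ring problem and $\lambda_k^h$ its discrete counterpart, the normalized error studied across the spectrum takes the form
\[
e_k = \frac{\lambda_k^h - \lambda_k}{\lambda_k}, \qquad k = 1, \dots, N,
\]
and the severity of membrane locking can be assessed by how $e_k$ behaves as a function of the mode number, the polynomial order, and the slenderness of the ring.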
Data compression algorithms typically rely on identifying repeated sequences of symbols in the original data to provide a compact representation of the same information, while maintaining the ability to recover the original data from the compressed sequence. Applying data transformations prior to the compression step can enhance compression capability, and the overall scheme remains lossless as long as the transformation is invertible. Floating-point data poses unique challenges for constructing invertible transformations with high compression potential. This paper identifies key conditions under which basic operations on floating-point data yield lossless transformations. We then present four methods that exploit these conditions to deliver lossless compression of real datasets, improving compression rates by up to 40%.
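One classical exactness condition of this kind, stated here purely as an illustration (the paper's own conditions may differ), is Sterbenz's lemma: if $x$ and $y$ are finite floating-point numbers of the same format satisfying
\[
\frac{y}{2} \le x \le 2y,
\]
then $x - y$ is exactly representable in that format, so the subtraction incurs no rounding error and the transformation $x \mapsto x - y$ (with $y$ retained) can be inverted exactly.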
The use of air traffic management (ATM) simulators for planning and operations can be challenging due to their modelling complexity. This paper presents XALM (eXplainable Active Learning Metamodel), a three-step framework integrating active learning and SHAP (SHapley Additive exPlanations) values into simulation metamodels to support ATM decision-making. XALM efficiently uncovers hidden relationships among the input and output variables of ATM simulators that are typically of interest in policy analysis. Our experiments show that XALM achieves predictive performance comparable to an XGBoost metamodel while requiring fewer simulations. Additionally, XALM exhibits superior explanatory capabilities compared to non-active-learning metamodels. Using the `Mercury' (flight and passenger) ATM simulator, XALM is applied to a real-world scenario at Paris Charles de Gaulle airport, extending an arrival manager's range and scope by analysing six variables. This case study illustrates XALM's effectiveness in enhancing simulation interpretability and in understanding variable interactions. By addressing computational challenges and improving explainability, XALM complements traditional simulation-based analyses. Lastly, we discuss two practical approaches for further reducing the computational burden of the metamodelling: we introduce a stopping criterion for active learning based on the inherent uncertainty of the metamodel, and we show how the simulations used for the metamodel can be reused across key performance indicators, thus decreasing the overall number of simulations needed.
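As an illustrative form of such an uncertainty-based stopping rule (our notation; the exact criterion used by XALM may differ), the active-learning loop can be halted once the metamodel's predictive uncertainty over the remaining candidate simulations drops below a tolerance:
\[
\max_{x \in \mathcal{X}_{\mathrm{cand}}} \hat{\sigma}_n(x) \le \varepsilon,
\]
where $\hat{\sigma}_n(x)$ is the metamodel's predictive standard deviation at input $x$ after $n$ simulations and $\varepsilon$ is a user-chosen threshold.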
Given samples from two non-negative random variables, we propose a new class of nonparametric tests for the null hypothesis that one random variable dominates the other with respect to second-order stochastic dominance. These tests are based on the Lorenz P-P plot (LPP), which is the composition of the inverse unscaled Lorenz curve of one distribution with the unscaled Lorenz curve of the other. The LPP exceeds the identity function if and only if the dominance condition is violated, providing a simple method to construct test statistics, given by functionals of the difference between the identity and the LPP. We determine a stochastic upper bound for such test statistics under the null hypothesis and derive its limit distribution, which can be approximated via bootstrap procedures. We also establish the asymptotic validity of the tests under relatively mild conditions, allowing for both dependent and independent samples. Finally, finite sample properties are investigated through simulation studies.
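Spelled out in notation introduced here for concreteness (the assignment of roles to the two samples is fixed by the dominance hypothesis under test), with $L_1$ and $L_2$ the unscaled Lorenz curves of the two distributions, the LPP and one natural test statistic read
\[
\mathrm{LPP}(t) = L_1^{-1}\big(L_2(t)\big), \qquad T = \sup_{t} \big\{ \mathrm{LPP}(t) - t \big\},
\]
so that strictly positive values of $T$ correspond to a violation of the dominance condition.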
This note presents a refined local approximation for the logarithm of the ratio between the negative multinomial probability mass function and a multivariate normal density, both having the same mean-covariance structure. This approximation, which is derived using Stirling's formula and a meticulous treatment of Taylor expansions, yields an upper bound on the Hellinger distance between the jittered negative multinomial distribution and the corresponding multivariate normal distribution. Upper bounds on the Le Cam distance between negative multinomial and multivariate normal experiments ensue.
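For reference, the Hellinger distance appearing in the bound is defined in the standard way,
\[
H^2(P, Q) = \frac{1}{2} \int \big( \sqrt{\mathrm{d}P} - \sqrt{\mathrm{d}Q} \big)^2,
\]
so an upper bound on $H$ between the jittered negative multinomial distribution and the matching multivariate normal distribution also controls the total variation distance, via the standard inequality $\mathrm{TV}(P,Q) \le \sqrt{2}\, H(P,Q)$.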
Penalized $M$-estimators for logistic regression models have previously been studied for fixed dimension in order to obtain sparse statistical models and perform automatic variable selection. In this paper, we derive asymptotic results for penalized $M$-estimators when the dimension $p$ grows to infinity with the sample size $n$. Specifically, we obtain consistency and rates of convergence for some choices of the penalty function. Moreover, we prove that these estimators consistently select variables with probability tending to one and derive their asymptotic distribution.
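A generic form of the estimators under discussion, written here only for orientation (the paper's loss and penalty classes may be more general), is
\[
\hat{\beta}_n = \arg\min_{\beta \in \mathbb{R}^p} \; \frac{1}{n} \sum_{i=1}^{n} \rho\big(y_i, x_i^{\top}\beta\big) + \sum_{j=1}^{p} J_{\lambda_n}\big(|\beta_j|\big),
\]
where $\rho$ is a (possibly robust) loss for the logistic model and $J_{\lambda_n}$ is a penalty, such as the LASSO or SCAD penalty, whose tuning parameter $\lambda_n$ may depend on the sample size.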
We investigate the error of the Euler scheme in the case where the right-hand side function of the underlying ODE satisfies nonstandard assumptions, such as a local one-sided Lipschitz condition and local H\"older continuity. Moreover, we consider two cases regarding information availability: exact and noisy evaluations of the right-hand side function. An optimality analysis of the Euler scheme is also provided. Finally, we present the results of some numerical experiments.
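For concreteness, the scheme in question is the classical explicit Euler method, stated here in its standard form: for the ODE $y'(t) = f(t, y(t))$, $y(0) = y_0$, on a uniform grid with step $h$,
\[
y_{k+1} = y_k + h\, \tilde{f}(t_k, y_k), \qquad t_k = k h,
\]
where $\tilde{f} = f$ in the exact-information case and $\tilde{f}$ denotes a perturbed (noisy) evaluation of $f$ otherwise.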
We consider the problem of identifying the acoustic impedance of a wall surface from noisy pressure measurements in a closed room using a Bayesian approach. The room acoustics is modeled by the interior Helmholtz equation with impedance boundary conditions. The aim is to compute moments of the acoustic impedance in order to estimate a suitable density function of the impedance coefficient. For the computation of moments we use ratio estimators and Monte Carlo sampling. We consider two different experimental scenarios. In the first scenario, the noisy measurements correspond to a wall modeled by impedance boundary conditions. In this case, the Bayesian algorithm uses a model that is (up to the noise) consistent with the measurements, and our algorithm identifies the acoustic impedance with high accuracy. In the second scenario, the noisy measurements come from a coupled acoustic-structural problem modeling a wall made of glass, whereas the Bayesian algorithm still uses a model with impedance boundary conditions. In this case, the parameter identification model is inconsistent with the measurements and therefore not capable of representing them well. Nonetheless, for particular frequency bands the Bayesian algorithm identifies estimates with high likelihood; outside these frequency bands the algorithm fails. We discuss the results of both examples and possible reasons for the failure in the latter case at particular frequency values.
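As a sketch of the ratio-estimator step in notation fixed here (with an assumed prior $\pi_0$ and likelihood $L$ induced by the Helmholtz model with impedance boundary conditions), posterior moments of the impedance coefficient $Z$ given the measurements $y$ are written as ratios of prior expectations and approximated by Monte Carlo sampling from the prior:
\[
\mathbb{E}[Z^k \mid y] = \frac{\mathbb{E}_{\pi_0}\big[Z^k\, L(y \mid Z)\big]}{\mathbb{E}_{\pi_0}\big[L(y \mid Z)\big]} \approx \frac{\sum_{i=1}^{M} Z_i^k\, L(y \mid Z_i)}{\sum_{i=1}^{M} L(y \mid Z_i)}, \qquad Z_i \stackrel{\mathrm{iid}}{\sim} \pi_0.
\]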
We develop new matching estimators for estimating causal quantile exposure-response functions and quantile exposure effects with continuous treatments. We provide identification results for the parameters of interest and establish the asymptotic properties of the derived estimators. We introduce a two-step estimation procedure. In the first step, we construct a matched data set via generalized propensity score matching, adjusting for measured confounding. In the second step, we fit a kernel quantile regression to the matched set. We also derive a consistent estimator of the variance of the matching estimators. Using simulation studies, we compare the introduced approach with existing alternatives in various settings. We apply the proposed method to Medicare claims data for the period 2012-2014, and we estimate the causal effect of exposure to PM$_{2.5}$ on the length of hospital stay for each zip code of the contiguous United States.
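Schematically, and in notation introduced here for illustration (the paper's second step may use a different kernel-regression variant), the second step fits, for each treatment level $t$ and quantile level $\tau$, a kernel quantile regression on the matched data set $\mathcal{M}$:
\[
\hat{q}_\tau(t) = \arg\min_{q} \sum_{i \in \mathcal{M}} K\!\left(\frac{T_i - t}{h}\right) \rho_\tau\big(Y_i - q\big), \qquad \rho_\tau(u) = u\big(\tau - \mathbf{1}\{u < 0\}\big),
\]
where $T_i$ and $Y_i$ are the treatment and outcome of matched unit $i$, $K$ is a kernel, and $h$ is a bandwidth.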
We consider finite element approximations to the optimal constant for the Hardy inequality with exponent $p=2$ in bounded domains of dimension $n=1$ or $n\geq 3$. For finite element spaces of piecewise linear and continuous functions on a mesh of size $h$, we prove that the approximate Hardy constant, $S_h^n$, converges to the optimal Hardy constant $S^n$ no slower than $O(1/\vert \log h \vert)$. We also show that the convergence is no faster than $O(1/\vert \log h \vert^2)$ if $n=1$, or if $n\geq 3$ and the domain is the unit ball and the finite element discretization exploits the rotational symmetry of the problem. Our estimates are compared to exact values of $S_h^n$ obtained computationally.
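In the classical point-singularity form consistent with the restriction to $n=1$ or $n\geq 3$ (our reading; the paper's precise weight and boundary conditions may differ), the constants can be written as Rayleigh quotients over the continuous and discrete spaces:
\[
S^n = \inf_{0 \neq u \in H_0^1(\Omega)} \frac{\int_{\Omega} |\nabla u|^2 \, \mathrm{d}x}{\int_{\Omega} u^2 / |x|^2 \, \mathrm{d}x}, \qquad S_h^n = \inf_{0 \neq u_h \in V_h} \frac{\int_{\Omega} |\nabla u_h|^2 \, \mathrm{d}x}{\int_{\Omega} u_h^2 / |x|^2 \, \mathrm{d}x},
\]
where $V_h \subset H_0^1(\Omega)$ is the space of continuous piecewise linear finite element functions on a mesh of size $h$.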
We show that spectral data of the Koopman operator arising from an analytic expanding circle map $\tau$ can be effectively calculated using an EDMD-type (extended dynamic mode decomposition) algorithm combining a collocation method of order $m$ with a Galerkin method of order $n$. The main result is that if $m \geq \delta n$, where $\delta$ is an explicitly given positive number quantifying by how much $\tau$ expands concentric annuli containing the unit circle, then the method converges and approximates the spectrum of the Koopman operator, taken to be acting on a space of analytic hyperfunctions, exponentially fast in $n$. Additionally, these results extend to more general expansive maps on suitable annuli containing the unit circle.
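For orientation, the operator in question is the composition (Koopman) operator
\[
(\mathcal{K} f)(x) = f\big(\tau(x)\big),
\]
acting here on a space of analytic hyperfunctions associated with an annulus containing the unit circle; the EDMD-type method builds a finite matrix from $m$ collocation points and $n$ Galerkin basis functions and, under the condition $m \geq \delta n$, its eigenvalues approximate the spectrum of $\mathcal{K}$ exponentially fast in $n$.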