In this paper we introduce an abstract setting for the convergence analysis of the virtual element approximation of an acoustic vibration problem. We discuss the effect of the stabilization parameters and remark that in some cases it is possible to achieve optimal convergence without the need for any stabilization. This statement is rigorously proved for the lowest order triangular element and supported by several numerical experiments.
The purpose of this paper is to examine how central notions in statistical learning theory, such as realisability, generalise under the assumption that the training and test distributions belong to the same credal set, i.e., a convex set of probability distributions. This can be considered a first step towards a more general treatment of statistical learning under epistemic uncertainty.
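As a toy illustration (not taken from the paper), the sketch below represents a finitely generated credal set on a three-element outcome space as the convex hull of a few extreme distributions; since expectation is linear in the distribution, lower and upper expectations over the convex hull are attained at its extreme points. All numbers are made up for illustration.

```python
import numpy as np

# Hypothetical finitely generated credal set on a 3-element outcome space,
# given as the convex hull of a few extreme probability distributions.
extreme_points = np.array([
    [0.6, 0.3, 0.1],
    [0.4, 0.4, 0.2],
    [0.5, 0.2, 0.3],
])

f = np.array([1.0, 0.0, -1.0])  # a bounded gain/loss defined on the outcomes

# Expectation is linear in the distribution, so the lower and upper
# expectations over the convex hull are attained at extreme points.
expectations = extreme_points @ f
print(f"lower expectation = {expectations.min():.2f}, "
      f"upper expectation = {expectations.max():.2f}")
```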
In this paper, we carry out the numerical analysis of a nonsmooth quasilinear elliptic optimal control problem, where the coefficient in the divergence term of the corresponding state equation is not differentiable with respect to the state variable. Despite the lack of differentiability of the nonlinearity in the quasilinear elliptic equation, the corresponding control-to-state operator is of class $C^1$ but not of class $C^2$. Analogously, the discrete control-to-state operators associated with the approximated control problems are proven to be of class $C^1$ only. By using an explicit second-order sufficient optimality condition, we prove a priori error estimates for a variational approximation, a piecewise constant approximation, and a continuous piecewise linear approximation of the continuous optimal control problem. The numerical tests confirm these error estimates.
This paper studies a nonparametric estimation approach for the interaction function within diffusion-type particle system models. We introduce two estimation methods based on empirical risk minimization. Our study encompasses an analysis of the stochastic and approximation errors associated with both procedures, along with an examination of certain minimax lower bounds. In particular, we show that there is a natural metric under which the corresponding minimax estimation error of the interaction function converges to zero at the parametric rate. This result is rather surprising given the complexity of the underlying estimation problem and the rather large classes of interaction functions for which the above parametric rate holds.
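To make the empirical-risk-minimization idea concrete, here is a minimal sketch (not the paper's estimator): particles are simulated from a mean-field system with a known linear interaction, and the interaction function is then recovered by least squares over a small polynomial dictionary. The system, dictionary, and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, dt, sigma = 50, 200, 0.01, 0.1

def phi_true(r):
    return -r                                # hypothetical interaction: linear attraction

# simulate the mean-field system dX_i = (1/N) sum_j phi(X_j - X_i) dt + sigma dW_i
X = np.zeros((T + 1, N))
X[0] = rng.normal(size=N)
for t in range(T):
    diffs = X[t][None, :] - X[t][:, None]    # diffs[i, j] = X_j - X_i
    X[t + 1] = (X[t] + phi_true(diffs).mean(axis=1) * dt
                + sigma * np.sqrt(dt) * rng.normal(size=N))

# empirical risk minimization over a small polynomial dictionary psi_k(r) = r^k:
# least squares fit of the observed increments against the candidate drifts
degrees = [1, 2, 3]
rows, targets = [], []
for t in range(T):
    diffs = X[t][None, :] - X[t][:, None]
    rows.append(np.stack([(diffs ** k).mean(axis=1) for k in degrees], axis=1))
    targets.append((X[t + 1] - X[t]) / dt)
A, y = np.concatenate(rows), np.concatenate(targets)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("estimated coefficients for r, r^2, r^3:", np.round(coef, 3))  # r-coefficient should be near -1
```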
In this paper we develop a new well-balanced discontinuous Galerkin (DG) finite element scheme with a subcell finite volume (FV) limiter for the numerical solution of the Einstein--Euler equations of general relativity, based on a first order hyperbolic reformulation of the Z4 formalism. The first order Z4 system, which is composed of 59 equations, is analyzed and proven to be strongly hyperbolic for a general metric. The well-balancing is achieved for arbitrary but a priori known equilibria by subtracting a discrete version of the equilibrium solution from the discretized time-dependent PDE system. Special care has also been taken in the design of the numerical viscosity so that the well-balancing property is preserved. As for the treatment of low density matter, e.g. when simulating massive compact objects like neutron stars surrounded by vacuum, we have introduced a new filter in the conversion from the conserved to the primitive variables, which prevents superluminal velocities when the density drops below a certain threshold and is potentially also very useful for the numerical investigation of highly rarefied relativistic astrophysical flows. Thanks to these improvements, all standard tests of numerical relativity are successfully reproduced, with three main achievements: (i) we obtain stable long-term simulations of stationary black holes, including Kerr black holes with extreme spin, which after an initial perturbation return perfectly back to the equilibrium solution up to machine precision; (ii) a (standard) TOV star under perturbation is evolved in pure vacuum ($\rho=p=0$) up to $t=1000$ with no need to introduce any artificial atmosphere around the star; and (iii) we solve the head-on collision of two puncture black holes, which was previously considered intractable within the Z4 formalism.
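The subtraction trick behind the well-balancing can be illustrated on a toy 1D balance law (a sketch under simplified assumptions, not the paper's DG scheme): whatever spatial discretization is used, subtracting its residual evaluated at the known equilibrium makes that equilibrium an exact fixed point of the update, whereas the standard update only preserves it up to truncation error.

```python
import numpy as np

# Toy 1D balance law u_t + u_x = s(x) on a periodic grid, with a known
# equilibrium u_eq satisfying u_eq'(x) = s(x).
N, L = 100, 1.0
dx = L / N
x = (np.arange(N) + 0.5) * dx
u_eq = np.sin(2 * np.pi * x)                 # chosen equilibrium
s = 2 * np.pi * np.cos(2 * np.pi * x)        # matching source term, s = u_eq'

def residual(u):
    # first-order upwind discretization of u_x - s
    return (u - np.roll(u, 1)) / dx - s

dt = 0.5 * dx
u_wb, u_std = u_eq.copy(), u_eq.copy()
for _ in range(1000):
    u_wb = u_wb - dt * (residual(u_wb) - residual(u_eq))   # well-balanced update
    u_std = u_std - dt * residual(u_std)                   # standard update
print("well-balanced deviation from equilibrium:", np.abs(u_wb - u_eq).max())  # exactly zero
print("standard scheme deviation:               ", np.abs(u_std - u_eq).max())  # O(dx) error
```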
Electromagnetic forming and perforation (EMFP) is a complex and innovative high strain rate process that involves electromagnetic-mechanical interactions for simultaneous metal forming and perforation. Instead of spending costly resources on repetitive experimental work, a properly designed numerical model can be effectively used for detailed analysis and characterization of this complex process. A coupled finite element (FE) model is considered for analyzing the multiphysics of the EMFP because of its robustness and improved accuracy. In this work, a detailed understanding of the process has been achieved by numerically simulating the forming and perforation of an Al6061-T6 tube with 12 and 36 holes, using two different punches, i.e., pointed and concave, in the LS-DYNA software. In order to shed light on the EMFP physics, the numerical simulation has been compared with experimental data in terms of the average hole diameter and the number of perforated holes, for different types of punches and a range of discharge energies. The simulated results show acceptable agreement with the experimental studies, with maximum deviations less than or equal to 6%, which clearly illustrates the efficacy and capability of the developed coupled multiphysics FE model.
In this work, we present a simple and unified analysis of the Johnson-Lindenstrauss (JL) lemma, a cornerstone in the field of dimensionality reduction critical for managing high-dimensional data. Our approach not only simplifies the analysis but also unifies various constructions under the JL framework, including spherical, binary-coin, sparse JL, Gaussian, and sub-Gaussian models. This simplified and unified treatment covers constructions that preserve the intrinsic geometry of data, which is essential across diverse applications from streaming algorithms to reinforcement learning. Notably, we deliver the first rigorous proof of the spherical construction's effectiveness and provide a general class of sub-Gaussian constructions within this simplified framework. At the heart of our contribution is an extension of the Hanson-Wright inequality to high dimensions, complete with explicit constants, marking a substantial step forward in the literature. By employing simple yet powerful probabilistic tools and analytical techniques, such as an enhanced diagonalization process, our analysis not only solidifies the JL lemma's theoretical foundation but also extends its practical reach, showcasing its adaptability and importance in contemporary computational algorithms.
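As a quick, hypothetical illustration of one of the constructions listed above (the Gaussian map; all parameters and constants are chosen arbitrarily and are not taken from the paper), the following sketch projects a high-dimensional point set and checks the pairwise distance distortion.

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d, eps = 200, 10_000, 0.2
k = int(np.ceil(8 * np.log(n) / eps**2))     # target dimension, constants chosen arbitrarily

X = rng.normal(size=(n, d))                  # hypothetical high-dimensional point set
P = rng.normal(size=(k, d)) / np.sqrt(k)     # Gaussian JL map
Y = X @ P.T

ratios = pdist(Y) / pdist(X)                 # ratio of projected to original pairwise distances
print(f"k = {k}, distance ratios in [{ratios.min():.3f}, {ratios.max():.3f}]")
```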
We propose a novel methodology to solve a key eigenvalue optimization problem which arises in the contractivity analysis of neural ODEs. When looking at contractivity properties of a one-layer weight-tied neural ODE $\dot{u}(t)=\sigma(Au(t)+b)$ (with $u,b \in {\mathbb R}^n$, $A$ a given $n \times n$ matrix, $\sigma : {\mathbb R} \to {\mathbb R}^+$ an activation function and, for a vector $z \in {\mathbb R}^n$, $\sigma(z) \in {\mathbb R}^n$ to be interpreted entry-wise), we are led to study the logarithmic norm of a set of products of the type $D A$, where $D$ is a diagonal matrix such that ${\mathrm{diag}}(D) \in \sigma'({\mathbb R}^n)$. Specifically, given a real number $c$ (usually $c=0$), the problem consists in finding the largest positive interval $\chi\subseteq [0,\infty)$ such that the logarithmic norm $\mu(DA) \le c$ for all diagonal matrices $D$ with $D_{ii}\in \chi$. We propose a two-level nested methodology: an inner level where, for a given $\chi$, we compute an optimizer $D^\star(\chi)$ by a gradient system approach, and an outer level where we tune $\chi$ so that the value $c$ is reached by $\mu(D^\star(\chi)A)$. We extend the proposed two-level approach to the general multilayer, and possibly time-dependent, case $\dot{u}(t) = \sigma( A_k(t) \ldots \sigma ( A_{1}(t) u(t) + b_{1}(t) ) \ldots + b_{k}(t) )$ and we propose several numerical examples to illustrate its behaviour, including its stabilizing performance on a one-layer neural ODE applied to the classification of the MNIST handwritten digits dataset.
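For intuition about the inner-level problem (a crude sketch with a made-up matrix, not the paper's gradient-system method), recall that the logarithmic 2-norm of $DA$ is the largest eigenvalue of the symmetric part of $DA$; one can then probe how its worst case over diagonal matrices $D$ with entries in $[0,\chi_{\max}]$ grows with $\chi_{\max}$:

```python
import numpy as np

def log_norm_2(M):
    """Logarithmic 2-norm: largest eigenvalue of the symmetric part of M."""
    return np.linalg.eigvalsh(0.5 * (M + M.T)).max()

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)) - 2.0 * np.eye(n)     # made-up weight matrix

def worst_case(chi_max, trials=2000):
    # crude probe of max_D mu_2(D A) over diagonal D with entries in [0, chi_max],
    # sampling vertices of the box; the paper's inner level uses a gradient system instead
    best = -np.inf
    for _ in range(trials):
        D = np.diag(rng.choice([0.0, chi_max], size=n))
        best = max(best, log_norm_2(D @ A))
    return best

for chi_max in [0.1, 0.5, 1.0, 2.0]:
    print(f"chi_max = {chi_max:4.1f}:  sampled max mu_2(DA) = {worst_case(chi_max): .3f}")
```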
In the context of interactive theorem provers based on a dependent type theory, automation tactics (dedicated decision procedures, calls to automated solvers, ...) are often limited to goals which lie exactly in some expected logical fragment. This very often prevents users from applying these tactics in other, even similar, contexts. This paper discusses the design and the implementation of pre-processing operations for automating formal proofs in the Coq proof assistant. It presents the implementation of a wide variety of predictable, atomic goal transformations, which can be composed in various ways to target different backends. A gallery of examples illustrates how this significantly expands the power of automation engines.
Accurate triangulation of the domain plays a pivotal role in computing the numerical approximation of differential operators. A good triangulation is one that aids in reducing discretization errors. In a standard collocation technique, a smooth curved domain is typically triangulated by taking points on the boundary and approximating the domain by a polygon. However, such an approach often leads to geometrical errors which directly affect the accuracy of the numerical approximation. To limit such geometrical errors, \textit{isoparametric}, \textit{subparametric}, and \textit{iso-geometric} methods were introduced, which allow the approximation of curved surfaces (or curved line segments). In this paper, we present an efficient finite element method to approximate the solution to the elliptic boundary value problem (BVP) which governs the response of an elastic solid containing a v-notch and inclusions. The algebraically nonlinear constitutive equation, along with the balance of linear momentum, reduces to a second-order quasi-linear elliptic partial differential equation. Our approach allows us to represent complex curved boundaries by a smooth \textit{one-of-its-kind} point transformation. The main idea is to obtain higher-order shape functions which enable us to accurately compute the entries in the finite element matrices and vectors. A Picard-type linearization is utilized to handle the nonlinearities in the governing differential equation. The numerical results for the test cases show considerable improvement in accuracy.
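To illustrate the Picard-type linearization in isolation (a minimal sketch on a hypothetical 1D model problem, not the paper's v-notch/inclusion setting), the nonlinear coefficient is frozen at the previous iterate and a linear problem is solved at each step:

```python
import numpy as np

# Hypothetical 1D model problem: -(k(u) u')' = f on (0,1), u(0) = u(1) = 0.
N = 200
h = 1.0 / N
f = np.ones(N - 1)                         # right-hand side at interior nodes

def k(u):
    return 1.0 + u**2                      # made-up nonlinear coefficient

u = np.zeros(N + 1)                        # initial guess
for it in range(50):
    k_half = k(0.5 * (u[:-1] + u[1:]))             # coefficient frozen at previous iterate
    main = (k_half[:-1] + k_half[1:]) / h**2       # tridiagonal system for interior nodes
    off = -k_half[1:-1] / h**2
    Ah = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    u_new = np.zeros(N + 1)
    u_new[1:-1] = np.linalg.solve(Ah, f)
    diff = np.abs(u_new - u).max()
    u = u_new
    if diff < 1e-12:
        break
print(f"Picard iteration stopped after {it + 1} steps, max|u| = {u.max():.4f}")
```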
In this paper, we present a multimodal and dynamical VAE (MDVAE) applied to unsupervised audio-visual speech representation learning. The latent space is structured to dissociate the latent dynamical factors that are shared between the modalities from those that are specific to each modality. A static latent variable is also introduced to encode the information that is constant over time within an audiovisual speech sequence. The model is trained in an unsupervised manner on an audiovisual emotional speech dataset, in two stages. In the first stage, a vector quantized VAE (VQ-VAE) is learned independently for each modality, without temporal modeling. The second stage consists of learning the MDVAE model on the intermediate representation of the VQ-VAEs before quantization. The disentanglement between static versus dynamical and modality-specific versus modality-common information occurs during this second training stage. Extensive experiments are conducted to investigate how audiovisual speech latent factors are encoded in the latent space of MDVAE. These experiments include manipulating audiovisual speech, audiovisual facial image denoising, and audiovisual speech emotion recognition. The results show that MDVAE effectively combines the audio and visual information in its latent space. They also show that the learned static representation of audiovisual speech can be used for emotion recognition with little labeled data, and with better accuracy than unimodal baselines and a state-of-the-art supervised model based on an audiovisual transformer architecture.
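A compact PyTorch skeleton of the latent-space partition described above is given below. It is a hypothetical sketch only: the dimensions, encoder/decoder choices, and the stand-in for the pre-quantization VQ-VAE features are placeholders, not the authors' architecture, and the training losses (reconstruction plus KL terms) are omitted.

```python
import torch
import torch.nn as nn

class MDVAESketch(nn.Module):
    """Illustrative latent partition: static s, shared dynamics w,
    modality-specific dynamics z_a / z_v (all sizes hypothetical)."""
    def __init__(self, d_a=64, d_v=64, d_s=32, d_w=16, d_za=8, d_zv=8, h=128):
        super().__init__()
        self.enc_s = nn.Linear(d_a + d_v, 2 * d_s)               # static, one per sequence
        self.enc_w = nn.GRU(d_a + d_v, h, batch_first=True)      # shared dynamics
        self.head_w = nn.Linear(h, 2 * d_w)
        self.enc_za = nn.GRU(d_a, h, batch_first=True)           # audio-specific dynamics
        self.head_za = nn.Linear(h, 2 * d_za)
        self.enc_zv = nn.GRU(d_v, h, batch_first=True)           # visual-specific dynamics
        self.head_zv = nn.Linear(h, 2 * d_zv)
        self.dec_a = nn.Sequential(nn.Linear(d_s + d_w + d_za, h), nn.Tanh(), nn.Linear(h, d_a))
        self.dec_v = nn.Sequential(nn.Linear(d_s + d_w + d_zv, h), nn.Tanh(), nn.Linear(h, d_v))

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x_a, x_v):
        # x_a: (B, T, d_a) audio features, x_v: (B, T, d_v) visual features,
        # stand-ins for the pre-quantization VQ-VAE representations
        x_av = torch.cat([x_a, x_v], dim=-1)
        s = self.reparam(self.enc_s(x_av.mean(dim=1)))           # constant over time
        w = self.reparam(self.head_w(self.enc_w(x_av)[0]))       # shared, per frame
        za = self.reparam(self.head_za(self.enc_za(x_a)[0]))     # audio-only, per frame
        zv = self.reparam(self.head_zv(self.enc_zv(x_v)[0]))     # visual-only, per frame
        s_t = s.unsqueeze(1).expand(-1, x_a.shape[1], -1)
        rec_a = self.dec_a(torch.cat([s_t, w, za], dim=-1))
        rec_v = self.dec_v(torch.cat([s_t, w, zv], dim=-1))
        return rec_a, rec_v

x_a, x_v = torch.randn(4, 50, 64), torch.randn(4, 50, 64)
rec_a, rec_v = MDVAESketch()(x_a, x_v)
```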