Flexural wave scattering plays a crucial role in optimizing and designing structures for various engineering applications. Mathematically, the flexural wave scattering problem on an infinite thin plate is described by a fourth-order plate-wave equation on an unbounded domain, making it challenging to solve directly using the regular linear finite element method (FEM). In this paper, we propose two numerical methods, the interior penalty FEM (IP-FEM) and the boundary penalty FEM (BP-FEM) with a transparent boundary condition (TBC), to study flexural wave scattering by an arbitrary-shaped cavity on an infinite thin plate. Both methods decompose the fourth-order plate-wave equation into the Helmholtz and modified Helmholtz equations with coupled conditions at the cavity boundary. A TBC is then constructed based on the analytical solutions of the Helmholtz and modified Helmholtz equations in the exterior domain, effectively truncating the unbounded domain into a bounded one. Using linear triangular elements, the IP-FEM and BP-FEM successfully suppress the oscillation of the bending moment of the solution at the cavity boundary, demonstrating superior stability and accuracy compared to the regular linear FEM when applied to this problem.
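For orientation, the decomposition both methods rely on is the classical factorization of the plate-wave operator (written here in dimensionless form with wavenumber $k$; notation ours):

```latex
\Delta^2 w - k^4 w = (\Delta + k^2)(\Delta - k^2)\, w = 0,
\qquad w = w_H + w_M,
```

where $(\Delta + k^2) w_H = 0$ is the Helmholtz equation and $(\Delta - k^2) w_M = 0$ the modified Helmholtz equation, the two fields being coupled through the conditions at the cavity boundary.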
The problem of optimal recovery of high-order mixed derivatives of bivariate functions with finite smoothness is studied. Based on the truncation method, we construct a numerical differentiation algorithm that is order-optimal both in accuracy and in the amount of Galerkin information it uses.
We consider generalized operator eigenvalue problems in variational form with random perturbations in the bilinear forms. This setting is motivated by variational forms of partial differential equations with random input data. The considered eigenpairs can be of higher but finite multiplicity. We investigate stochastic quantities of interest of the eigenpairs and discuss why, for multiplicity greater than 1, only the stochastic properties of the eigenspaces are meaningful, but not the ones of individual eigenpairs. To that end, we characterize the Fr\'echet derivatives of the eigenpairs with respect to the perturbation and provide a new linear characterization for eigenpairs of higher multiplicity. As a side result, we prove local analyticity of the eigenspaces. Based on the Fr\'echet derivatives of the eigenpairs we discuss a meaningful Monte Carlo sampling strategy for multiple eigenvalues and develop an uncertainty quantification perturbation approach. Numerical examples are presented to illustrate the theoretical results.
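As a toy illustration of why, for multiplicity greater than 1, only eigenspace statistics are meaningful, the following sketch (the matrix and perturbation scale are ours, not from the paper) perturbs a symmetric matrix with a double eigenvalue and tracks the orthogonal projector onto the two-dimensional eigenspace; the projector is stable even though the individual eigenvectors rotate arbitrarily inside the eigenspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric matrix with a double eigenvalue 2 and a simple eigenvalue 5.
A = np.diag([2.0, 2.0, 5.0])

def projector(B, k=2):
    """Orthogonal projector onto the span of the k lowest eigenvectors of B."""
    _, V = np.linalg.eigh(B)          # eigenvalues in ascending order
    Q = V[:, :k]
    return Q @ Q.T

# Small random symmetric perturbations: eigenvectors of the double eigenvalue
# rotate arbitrarily within the 2D eigenspace, but the projector onto that
# eigenspace varies only mildly (perturbation is far below the spectral gap).
projs = []
for _ in range(200):
    E = rng.normal(scale=1e-3, size=(3, 3))
    projs.append(projector(A + 0.5 * (E + E.T)))

P_mean = np.mean(projs, axis=0)
P_exact = projector(A)
print(np.linalg.norm(P_mean - P_exact))   # small: eigenspace statistics are stable
```

The Monte Carlo average of the projectors is a well-defined quantity of interest, whereas averaging individual eigenvectors would depend on an arbitrary basis choice inside the eigenspace.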
The investigation of mixture models is key to understanding and visualizing the distribution of multivariate data. Most mixture-model approaches are likelihood-based and are not suited to distributions with finite support or without a well-defined density function. This study proposes the Augmented Quantization method, a reformulation of the classical quantization problem that uses the p-Wasserstein distance. This metric can be computed in very general distribution spaces, in particular with varying supports. The clustering interpretation of quantization is revisited in this more general framework. The performance of Augmented Quantization is first demonstrated on analytical toy problems. It is then applied to a practical case study involving river flooding, in which mixtures of Dirac and Uniform distributions are built in the input space, enabling the identification of the most influential variables.
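In one dimension, the p-Wasserstein distance between equal-size empirical samples reduces to matching sorted samples, which makes the metric easy to sketch (the Dirac/Uniform mixture below is illustrative, not the paper's case study):

```python
import numpy as np

def wasserstein_p(x, y, p=2):
    """p-Wasserstein distance between two equal-size 1D empirical samples:
    in 1D the optimal coupling simply matches sorted samples."""
    x, y = np.sort(x), np.sort(y)
    return (np.mean(np.abs(x - y) ** p)) ** (1.0 / p)

rng = np.random.default_rng(1)
# Illustrative mixture of a Dirac (constant) component and a Uniform component.
dirac = np.full(500, 0.3)
unif = rng.uniform(0.0, 1.0, 500)
sample = np.concatenate([dirac, unif])

# W_2 of the mixture sample to each candidate representative distribution:
print(wasserstein_p(sample, np.full(1000, 0.3)))        # distance to a pure Dirac
print(wasserstein_p(sample, rng.uniform(0, 1, 1000)))   # distance to a pure Uniform
```

Note that both candidate distributions here lack a common dominating density (the Dirac has no density at all), which is exactly the setting where a Wasserstein-based quantization criterion remains well defined while likelihoods do not.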
This paper addresses the problem of end-effector formation control for a mixed group of two-link manipulators moving in a horizontal plane, comprising fully actuated manipulators and underactuated manipulators with only the second joint actuated (referred to as passive-active (PA) manipulators). The problem is solved by extending the distributed end-effector formation controller for the fully actuated manipulator to the PA manipulator moving in a horizontal plane, exploiting its integrability. We present a stability analysis of the closed-loop system under a given necessary condition and prove that the manipulators' end-effectors converge to the desired formation shape. The proposed method is validated by simulations.
The convergence analysis of least-squares finite element methods has led to various adaptive mesh-refinement strategies: collective marking algorithms driven by the built-in a posteriori error estimator or by an alternative explicit residual-based error estimator, as well as a separate marking strategy based on the alternative error estimator and an optimal data approximation algorithm. This paper reviews and discusses the available convergence results. In addition, all three strategies are investigated empirically for a set of benchmark examples of second-order elliptic partial differential equations in two spatial dimensions. Particular attention is paid to the choice of the marking and refinement parameters and to the approximation of the given data. The numerical experiments are reproducible using the author's software package octAFEM, available on the platform Code Ocean.
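Collective marking driven by an error estimator is commonly realized with Dörfler (bulk) marking; a minimal sketch (function and parameter names are generic, not the paper's implementation):

```python
import numpy as np

def doerfler_marking(eta2, theta=0.5):
    """Doerfler (bulk) marking: select a minimal set M of elements with
    sum_{T in M} eta_T^2 >= theta * sum_T eta_T^2.
    eta2: array of squared local error-estimator contributions."""
    order = np.argsort(eta2)[::-1]                 # largest contributions first
    csum = np.cumsum(eta2[order])
    m = np.searchsorted(csum, theta * eta2.sum()) + 1
    return order[:m]

# Example: dyadic estimator values so the arithmetic is exact.
eta2 = np.array([0.5, 0.125, 0.25, 0.0625, 0.0625])
marked = doerfler_marking(eta2, theta=0.5)
print(marked)
```

Greedy selection by decreasing estimator contribution yields a set of minimal cardinality satisfying the bulk criterion, which is the property the quasi-optimality theory for collective marking builds on.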
This paper introduces novel bulk-surface splitting schemes of first and second order for the wave equation with kinetic and acoustic boundary conditions of semi-linear type. For kinetic boundary conditions, we propose a reinterpretation of the system equations as a coupled system. This means that the bulk and surface dynamics are modeled separately and connected through a coupling constraint. This allows the implementation of splitting schemes, which show first-order convergence in numerical experiments. On the other hand, acoustic boundary conditions naturally separate bulk and surface dynamics. Here, Lie and Strang splitting schemes reach first- and second-order convergence, respectively, as we reveal numerically.
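The qualitative first- versus second-order behavior of the two splittings can be reproduced on a small toy problem; the following sketch (matrices are illustrative stand-ins for the bulk and surface sub-dynamics, not the paper's semi-discretization) compares Lie and Strang splitting for a linear system x' = (A + B)x:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative non-commuting sub-dynamics A and B.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 0.0], [-2.0, -0.1]])
x0 = np.array([1.0, 0.0])
T = 1.0

def lie(n):
    """Lie splitting: one substep of A, then one of B, per time step."""
    h = T / n
    EA, EB = expm(h * A), expm(h * B)
    x = x0
    for _ in range(n):
        x = EB @ (EA @ x)
    return x

def strang(n):
    """Strang splitting: half step of A, full step of B, half step of A."""
    h = T / n
    EA2, EB = expm(0.5 * h * A), expm(h * B)
    x = x0
    for _ in range(n):
        x = EA2 @ (EB @ (EA2 @ x))
    return x

exact = expm(T * (A + B)) @ x0
for n in (20, 40):
    print(n, np.linalg.norm(lie(n) - exact), np.linalg.norm(strang(n) - exact))
```

Halving the step size roughly halves the Lie error and quarters the Strang error, mirroring the first- and second-order convergence observed in the paper's experiments.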
Although Non-Homogeneous Poisson Processes have proven to give good results for modeling and studying threshold overshoots in various time series of measurements, they need to be complemented with an efficient, automatic diagnostic technique for locating change-points; when these are ignored, the estimated model fits the information contained in the real data poorly. For this reason, we propose a new method to resolve the segmentation uncertainty of time series of measurements in which the distribution of exceedances of a specific threshold is the focus of investigation. A key contribution of the present algorithm is that every day on which the threshold is exceeded is a candidate change-point, so each possible configuration of exceedance days constitutes a chromosome, and chromosomes recombine to produce offspring. Under the heuristics of a genetic algorithm, the search for change-points avoids purely local solutions and is driven toward the best attainable one, while reducing machine time wasted evaluating chromosomes unlikely to solve the problem. The objective function is the Minimum Description Length (\textit{MDL}), namely the joint posterior distribution of the parameters of each regime and of the change-points that delimit them, which also accounts for the influence of the change-point locations.
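A minimal sketch of such a genetic search (with a simplified Poisson-likelihood MDL score; the data, score, and GA settings are illustrative, not the paper's exact posterior-based objective):

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: daily exceedance counts with one true change-point at day 60.
counts = np.concatenate([rng.poisson(1.0, 60), rng.poisson(4.0, 40)])
n = len(counts)

def mdl(chrom):
    """Simplified MDL score: per-segment Poisson negative log-likelihood
    (log-factorial constant dropped; it is segmentation-independent)
    plus a log(n) model cost per change-point."""
    cps = np.flatnonzero(chrom) + 1
    bounds = np.concatenate([[0], cps, [n]])
    nll = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        seg = counts[a:b]
        lam = max(seg.mean(), 1e-9)
        nll += lam * (b - a) - seg.sum() * np.log(lam)
    return nll + len(cps) * np.log(n)

# Chromosomes are binary masks over the n-1 candidate change days (sparse init).
pop = rng.integers(0, 2, size=(40, n - 1)) * (rng.random((40, n - 1)) < 0.05)
pop[0] = 0                                          # baseline: no change-point
for gen in range(60):
    fit = np.array([mdl(c) for c in pop])
    parents = pop[np.argsort(fit)[:20]]             # elitist truncation selection
    cut = rng.integers(1, n - 1, size=20)
    kids = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                     for i, c in enumerate(cut)])   # one-point crossover
    mut = rng.random(kids.shape) < 1.0 / n          # bit-flip mutation
    pop = np.vstack([parents, np.where(mut, 1 - kids, kids)])

best = pop[np.argmin([mdl(c) for c in pop])]
print(np.flatnonzero(best) + 1)                     # estimated change-points
```

Because the parents survive each generation, the best MDL score is non-increasing, so the search can only improve on the no-change-point baseline.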
In this paper, new unfitted mixed finite elements are presented for elliptic interface problems with jump coefficients. Our model is based on a fictitious domain formulation with a distributed Lagrange multiplier. The relevance of our investigation is best seen in the framework of fluid-structure interaction problems. Two finite element schemes with piecewise constant Lagrange multiplier are proposed, and their stability is proved theoretically. Numerical results compare the performance of these elements, confirming the theoretical results and verifying that the schemes converge with optimal rate.
Epidemiological models must be calibrated to ground truth for downstream tasks such as producing forward projections or running what-if scenarios. The meaning of calibration changes in the case of a stochastic model, since the output of such a model is generally described by an ensemble or a distribution. Each member of the ensemble is usually mapped to a random-number seed (explicitly or implicitly). With the goal of finding not only the input parameter settings but also the random seeds that are consistent with the ground truth, we propose a class of Gaussian process (GP) surrogates along with an optimization strategy based on Thompson sampling. This Trajectory Oriented Optimization (TOO) approach produces actual trajectories close to the empirical observations, instead of a set of parameter settings for which only the mean simulation behavior matches the ground truth.
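A minimal sketch of the Thompson-sampling step over a GP surrogate of the trajectory discrepancy (the toy simulator, kernel, and candidate grid are ours, not the paper's TOO implementation):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical toy "simulator": output depends on a parameter theta and a seed.
def simulate(theta, seed):
    return (theta - 0.7) ** 2 + 0.1 * np.random.default_rng(seed).normal()

ground_truth = 0.0   # target value the simulated trajectory should match

def rbf(X, Y, ell=0.2):
    """Squared-exponential kernel on 1D inputs."""
    return np.exp(-0.5 * ((X[:, None] - Y[None, :]) / ell) ** 2)

# Observed design points and their discrepancies to the ground truth.
X = rng.uniform(0, 1, 8)
y = np.array([abs(simulate(t, s) - ground_truth) for s, t in enumerate(X)])

# GP posterior over a candidate grid.
grid = np.linspace(0, 1, 200)
K = rbf(X, X) + 1e-6 * np.eye(len(X))
Ks, Kss = rbf(grid, X), rbf(grid, grid)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mu = Ks @ alpha
V = np.linalg.solve(L, Ks.T)
cov = Kss - V.T @ V + 1e-6 * np.eye(len(grid))

# Thompson sampling: draw one posterior sample path and evaluate its argmin.
sample = rng.multivariate_normal(mu, cov)
theta_next = grid[np.argmin(sample)]
print(theta_next)
```

Drawing a full posterior sample and minimizing it (rather than minimizing the posterior mean) is what balances exploration and exploitation in the acquisition step.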
We present a novel approach for solving the shallow water equations using a discontinuous Galerkin spectral element method. The method we propose has three main features. First, it enjoys a discrete well-balanced property, in a spirit similar to that of, e.g., [20]. As in that reference, our scheme requires no a priori knowledge of the steady equilibrium; moreover, it does not involve the explicit solution of any local auxiliary problem to approximate this equilibrium. The scheme is also arbitrarily high order and satisfies a continuous-in-time cell entropy equality, which becomes an inequality as soon as additional dissipation is added to the method. The method is constructed starting from a global flux approach in which an additional flux term is built as the primitive of the source. We show that, in the context of nodal spectral finite elements, this can be translated into a simple modification of the integral of the source term. We prove that, when using Gauss-Lobatto nodal finite elements, this modified integration is equivalent at steady state to a high-order Gauss collocation method applied to an ODE for the flux. This method is superconvergent at the collocation points, thus providing a discrete well-balanced property very similar in spirit to that of [20], albeit without needing the explicit computation of a local approximation of the steady state. To control the entropy production, we introduce artificial viscosity corrections at the cell level and incorporate them into the scheme. We provide theoretical and numerical characterizations of the accuracy and equilibrium preservation of these corrections. Through extensive numerical benchmarking, we validate our theoretical predictions, with considerable improvements in accuracy for steady states as well as enhanced robustness for more complex scenarios.
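Schematically, for a one-dimensional balance law $u_t + f(u)_x = s$ (notation ours), the global flux approach folds the primitive of the source into the flux:

```latex
u_t + \partial_x G = 0,
\qquad G(u, x) = f(u) - \int_{x_0}^{x} s \,\mathrm{d}\xi,
```

so that steady states are characterized by $G$ being constant, which is the property the modified integration of the source term targets at the discrete level.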