The use of orthonormal polynomial bases has been found to be effective in preventing ill-conditioning of the system matrix in the primal formulation of Virtual Element Methods (VEM) for high polynomial degrees and in the presence of badly-shaped polygons. However, we show that the natural extension of an orthogonal polynomial basis built for the primal formulation is not sufficient to cure ill-conditioning in the mixed case. Thus, in the present work, we introduce an orthogonal vector-polynomial basis built ad hoc for the mixed formulation of VEM, which leads to high-quality solutions in all tested cases. Furthermore, a numerical experiment related to simulations in Discrete Fracture Networks (DFN), which are often characterised by very badly-shaped elements, is proposed to validate our procedures.
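For context (this is standard background, not the construction proposed in the abstract): in the primal setting, an orthonormal polynomial basis on an element $E$ is typically obtained from the monomial basis $\mathbf m = (m_1,\dots,m_{n_k})^T$ through the Cholesky factor of its Gram matrix,
$$ H_{ij} = \int_E m_i\, m_j \, dE, \qquad H = L L^{T}, \qquad \mathbf q = L^{-1}\mathbf m, $$
so that the Gram matrix of $\mathbf q$ is $L^{-1} H L^{-T} = I$. The abstract argues that the analogous componentwise extension to vector polynomials is not enough in the mixed formulation, which motivates the ad hoc orthogonal vector-polynomial basis.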
$l^q$-regularization has been demonstrated to be an attractive technique in machine learning and statistical modeling. It attempts to improve the generalization (prediction) capability of a machine (model) by appropriately shrinking its coefficients. The shape of an $l^q$ estimator differs for different choices of the regularization order $q$. In particular, $l^1$ leads to the LASSO estimate, while $l^{2}$ corresponds to smooth ridge regression. This makes the order $q$ a potential tuning parameter in applications. To facilitate the use of $l^{q}$-regularization, we seek a modeling strategy in which an elaborate selection of $q$ can be avoided. In this spirit, we place our investigation within the general framework of $l^{q}$-regularized kernel learning under a sample dependent hypothesis space (SDHS). For a designated class of kernel functions, we show that all $l^{q}$ estimators for $0< q < \infty$ attain similar generalization error bounds. These bounds are almost optimal in the sense that, up to a logarithmic factor, the upper and lower bounds are asymptotically identical. This finding tentatively reveals that, in some modeling contexts, the choice of $q$ might not have a strong impact on the generalization capability. From this perspective, $q$ can be specified arbitrarily, or merely by criteria other than generalization, such as smoothness, computational complexity, or sparsity.
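As an illustration (notation ours, not taken from the abstract), the $l^q$-regularized kernel estimator over a sample dependent hypothesis space built on the data $\mathbf z = \{(x_i,y_i)\}_{i=1}^{m}$ typically takes the form
$$ f_{\mathbf z,\lambda,q}(x) = \sum_{j=1}^{m} c_j\, K(x, x_j), \qquad \mathbf c \in \arg\min_{\mathbf c\in\mathbb R^{m}} \frac{1}{m}\sum_{i=1}^{m}\Big(y_i - \sum_{j=1}^{m} c_j K(x_i,x_j)\Big)^{2} + \lambda \sum_{j=1}^{m} |c_j|^{q}, $$
with $q=1$ giving the LASSO-type estimate and $q=2$ the ridge-type estimate.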
The design of particle simulation methods for collisional plasma physics has always represented a challenge due to the unbounded total collisional cross section, which prevents a natural extension of the classical Direct Simulation Monte Carlo (DSMC) method devised for the Boltzmann equation. One way to overcome this problem is to design Monte Carlo algorithms that are robust in the so-called grazing collision limit. In the first part of this manuscript, we focus on the construction of collision algorithms for the Landau-Fokker-Planck equation that are based on the grazing collision asymptotics and avoid the use of iterative solvers. Subsequently, we discuss problems involving uncertainties and show how to develop a stochastic Galerkin projection of the particle dynamics which allows spectral accuracy to be recovered for smooth solutions in the random space. Several classical numerical tests are reported to validate the present approach.
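For context (standard notation, not taken from the abstract), a stochastic Galerkin projection expands the uncertain quantities in a generalized polynomial chaos basis in the random variable $z$,
$$ f(v,t,z) \approx \sum_{k=0}^{M} \hat f_k(v,t)\,\Phi_k(z), \qquad \mathbb E\big[\Phi_h(z)\Phi_k(z)\big] = \delta_{hk}, $$
and evolves the coefficients $\hat f_k$; for solutions that are smooth in $z$, the truncation error decays spectrally in $M$.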
In this work we propose tailored model order reduction for varying boundary optimal control problems governed by parametric partial differential equations. By varying boundary control we mean that a specific parameter determines where on the boundary the control acts on the system. This peculiar formulation might benefit from model order reduction: fast and reliable simulations of this model can be extremely useful in many applied fields, such as geophysics and energy engineering. However, varying boundary control features very complicated and diversified parametric behaviour for the state and adjoint variables. For example, as the boundary control parameter changes, the state solution may exhibit transport phenomena. Moreover, the problem loses its affine structure. It is well known that classical model order reduction techniques fail in this setting, both in accuracy and in efficiency. Thus, we propose reduced approaches inspired by those used when dealing with wave-like phenomena. Indeed, we compare standard proper orthogonal decomposition with two tailored strategies: geometric recasting and local proper orthogonal decomposition. Geometric recasting solves the optimization system in a reference domain, simplifying the problem at hand and avoiding hyper-reduction, while local proper orthogonal decomposition builds local bases to increase the accuracy of the reduced solution in very general settings (where geometric recasting is unfeasible). We compare the various approaches on two numerical experiments based on geometries of increasing complexity.
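For background (not a description of the tailored strategies above), a proper orthogonal decomposition basis of dimension $N$ is built from snapshots $u(\mu_1),\dots,u(\mu_{n_s})$ as the minimizer of the projection error,
$$ \min_{\substack{V_N \\ \dim V_N = N}} \; \sum_{i=1}^{n_s} \big\| u(\mu_i) - \Pi_{V_N} u(\mu_i) \big\|^2 , $$
and is obtained in practice from the $N$ leading left singular vectors of the snapshot matrix; the local variant mentioned above constructs such bases on subsets (clusters) of the parameter domain.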
In the approximation of functions from point values, least-squares methods provide more stability than interpolation, at the expense of an increased sampling budget. We show that near-optimal approximation error can nevertheless be achieved, in an expected $L^2$ sense, as soon as the sample size $m$ exceeds the dimension $n$ of the approximation space by a constant factor. On the other hand, for $m=n$, we obtain an interpolation strategy with a stability factor of order $n$. The proposed sampling algorithms are greedy procedures based on arXiv:0808.0163 and arXiv:1508.03261, with polynomial computational complexity.
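A minimal sketch of the setting (standard in this literature; notation not from the abstract): given an approximation space $V_n$ with $\dim V_n = n$ and points $x^1,\dots,x^m$ drawn from a suitable sampling measure with weight $w$, the weighted least-squares estimator is
$$ \tilde u = \arg\min_{v\in V_n} \sum_{i=1}^{m} w(x^i)\,\big| u(x^i) - v(x^i) \big|^2 , $$
and near-optimality means $\mathbb E\big(\|u-\tilde u\|_{L^2}^2\big) \lesssim e_n(u)^2$, where $e_n(u)$ denotes the best-approximation error of $u$ from $V_n$.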
The realization of a standard Adaptive Finite Element Method (AFEM) preserves mesh conformity by performing a completion step in the refinement loop: in addition to the elements marked for refinement due to their contribution to the global error estimator, other elements are refined for purely geometric reasons. In the new perspective opened by the introduction of Virtual Element Methods (VEM), elements with hanging nodes can be viewed as polygons with aligned edges, carrying virtual functions together with standard polynomial functions. The potential advantage is that all activated degrees of freedom are motivated by error reduction, not merely by geometry. This point of view is the basis of the paper [L. Beirao da Veiga et al., Adaptive VEM: stabilization-free a posteriori error analysis and contraction property, SIAM Journal on Numerical Analysis, vol. 61, 2023], devoted to the convergence analysis of an adaptive VEM generated by successive newest-vertex bisections of triangular elements without applying completion, in the lowest-order case (polynomial degree $k=1$). The purpose of this paper is to extend these results to VEMs of order $k>1$ built on triangular meshes. The problem at hand is a variable-coefficient, second-order self-adjoint elliptic equation with Dirichlet boundary conditions; the data of the problem are assumed to be piecewise polynomials of degree $k-1$. By extending the concept of the global index of a hanging node, under an admissibility assumption on the mesh, we derive a stabilization-free a posteriori error estimator. It is the sum of residual-type terms and certain virtual inconsistency terms (which vanish for $k=1$). We define an adaptive VEM of order $k$ based on this estimator, and we prove its convergence by establishing a contraction result for a linear combination of the (squared) energy norm of the error, the residual estimator, and the virtual inconsistency estimator.
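Schematically (constants and notation are ours, for illustration only), a contraction result of the kind described above states that there exist $\gamma, \mu > 0$ and $\rho \in (0,1)$ such that, between consecutive adaptive iterations $j$ and $j+1$,
$$ |\!|\!| u - u_{j+1} |\!|\!|^{2} + \gamma\, \eta_{j+1}^{2} + \mu\, \zeta_{j+1}^{2} \;\le\; \rho \,\Big( |\!|\!| u - u_{j} |\!|\!|^{2} + \gamma\, \eta_{j}^{2} + \mu\, \zeta_{j}^{2} \Big), $$
where $\eta_j$ is the residual estimator and $\zeta_j$ the virtual inconsistency estimator (which vanishes for polynomial degree $k=1$).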
We consider semigroup algorithmic problems in the Special Affine group $\mathsf{SA}(2, \mathbb{Z}) = \mathbb{Z}^2 \rtimes \mathsf{SL}(2, \mathbb{Z})$, the group of affine transformations of the lattice $\mathbb{Z}^2$ that preserve orientation. Our paper focuses on two decision problems introduced by Choffrut and Karhum\"{a}ki (2005): the Identity Problem (does a semigroup contain the neutral element?) and the Group Problem (is a semigroup a group?) for finitely generated sub-semigroups of $\mathsf{SA}(2, \mathbb{Z})$. We show that both problems are decidable and NP-complete. Since $\mathsf{SL}(2, \mathbb{Z}) \leq \mathsf{SA}(2, \mathbb{Z}) \leq \mathsf{SL}(3, \mathbb{Z})$, our result extends that of Bell, Hirvensalo and Potapov (SODA 2017) on the NP-completeness of both problems in $\mathsf{SL}(2, \mathbb{Z})$, and constitutes a first step towards the corresponding open problems in $\mathsf{SL}(3, \mathbb{Z})$.
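Concretely, elements of $\mathsf{SA}(2, \mathbb{Z})$ can be written as pairs $(\mathbf{t}, A)$ with $\mathbf{t} \in \mathbb{Z}^2$ and $A \in \mathsf{SL}(2, \mathbb{Z})$, acting by $\mathbf{x} \mapsto A\mathbf{x} + \mathbf{t}$ and composing as
$$ (\mathbf{t}_1, A_1)\,(\mathbf{t}_2, A_2) = (\mathbf{t}_1 + A_1 \mathbf{t}_2,\; A_1 A_2), $$
or equivalently as the $3 \times 3$ integer matrices $\begin{pmatrix} A & \mathbf{t} \\ 0 & 1 \end{pmatrix}$, which makes the chain $\mathsf{SL}(2, \mathbb{Z}) \leq \mathsf{SA}(2, \mathbb{Z}) \leq \mathsf{SL}(3, \mathbb{Z})$ explicit.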
In this paper, two novel classes of implicit exponential Runge-Kutta (ERK) methods are studied for solving highly oscillatory systems. First, we analyze the symplectic conditions for two kinds of exponential integrators and derive a symplectic method. In order to solve highly oscillatory problems effectively, we aim to design highly accurate implicit ERK integrators. By comparing the Taylor series of the numerical solution with that of the exact solution, we verify that the order conditions of the two new kinds of exponential methods are identical to those of classical Runge-Kutta (RK) methods; hence, highly accurate numerical methods can be formulated directly from the coefficients of RK methods. We also investigate the linear stability regions of these exponential methods. Finally, numerical results display the long-time energy preservation of the symplectic method and illustrate the accuracy and efficiency of the new methods in comparison with standard ERK methods.
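For orientation (a classical member of the exponential-integrator family, not one of the new methods above), an exponential integrator for the semilinear system $y' = M y + f(y)$ treats the stiff linear part by its exact flow; the simplest example is the exponential Euler method,
$$ y_{n+1} = e^{hM} y_n + h\,\varphi_1(hM)\, f(y_n), \qquad \varphi_1(z) = \frac{e^{z}-1}{z}, $$
while implicit ERK methods employ such matrix functions inside implicitly defined internal stages.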
Among randomized numerical linear algebra strategies, so-called sketching procedures are emerging as effective dimension-reduction tools for accelerating Krylov subspace methods for, e.g., the solution of linear systems, eigenvalue computations, and the approximation of matrix functions. While there is plenty of experimental evidence showing that sketched Krylov solvers may dramatically improve performance over standard Krylov methods, many features of these schemes are still unexplored. We derive new theoretical results that significantly improve our understanding of sketched Krylov methods and allow us to identify, among several possible equivalent formulations, the most suitable sketched approximations according to their numerical stability properties. These results are also employed to analyze the error of sketched Krylov methods in the approximation of the action of matrix functions, significantly contributing to the theory available in the current literature.
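For context, the sketching operators in question are typically random $\varepsilon$-subspace embeddings: matrices $S \in \mathbb{R}^{s \times n}$ with $s \ll n$ such that
$$ (1-\varepsilon)\,\|v\|_2^2 \;\le\; \|S v\|_2^2 \;\le\; (1+\varepsilon)\,\|v\|_2^2 \qquad \text{for all } v \in \mathcal{V}, $$
where $\mathcal{V}$ is (an augmentation of) the Krylov subspace; the Galerkin or least-squares conditions are then imposed on the sketched quantities instead of the full ones.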
This study analyzes the possible relationship between personality traits, in terms of the Big Five (extraversion, agreeableness, responsibility, emotional stability and openness to experience), and social interactions mediated by digital platforms in different socioeconomic and cultural contexts. We considered data from a questionnaire and from the experience of using a chatbot, as a means of requesting and offering help, with students from four universities: the University of Trento (Italy), the National University of Mongolia, the London School of Economics (United Kingdom) and the Universidad Cat\'olica Nuestra Se\~nora de la Asunci\'on (Paraguay). The main findings confirm that personality traits may influence social interactions and active participation in groups. Therefore, they should be taken into account to enrich matching algorithms that recommend connections between people who ask for help and people who could respond, not only on the basis of their knowledge and skills.
Finite element methods and kinematically coupled schemes that decouple the fluid velocity and the structure displacement have been extensively studied for incompressible fluid-structure interaction (FSI) over the past decade. While these methods are known to be stable and easy to implement, optimal error analysis has remained challenging. Previous work has primarily relied on the classical elliptic projection technique, which is only suitable for parabolic problems and does not lead to optimal convergence of the numerical solutions of FSI problems in the standard $L^2$ norm. In this article, we propose a new kinematically coupled scheme for an incompressible FSI thin-structure model and establish a new framework for the numerical analysis of FSI problems in terms of a newly introduced coupled non-stationary Ritz projection, which allows us to prove optimal-order convergence of the proposed method in the $L^2$ norm. The methodology presented in this article is also applicable to numerous other FSI models and serves as a fundamental tool for advancing research in this field.
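For reference (this is the classical tool mentioned above, not the new coupled projection), the standard Ritz (elliptic) projection $R_h u \in V_h$ associated with a bilinear form $a(\cdot,\cdot)$ is defined by
$$ a\big(R_h u - u,\, v_h\big) = 0 \qquad \text{for all } v_h \in V_h, $$
and optimal $L^2$ error analysis for parabolic problems rests on its approximation properties; the coupled non-stationary Ritz projection introduced in the article plays the analogous role for the coupled fluid-structure system.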