In its additive version, Bohr-Mollerup's remarkable theorem states that the unique (up to an additive constant) convex solution $f(x)$ to the equation $\Delta f(x)=\ln x$ on the open half-line $(0,\infty)$ is the log-gamma function $f(x)=\ln\Gamma(x)$, where $\Delta$ denotes the classical difference operator and $\Gamma(x)$ denotes the Euler gamma function. In a recently published open-access book, the authors provided and illustrated a far-reaching generalization of Bohr-Mollerup's theorem by considering the functional equation $\Delta f(x)=g(x)$, where $g$ can be chosen from a wide and rich class of functions that have convexity or concavity properties of any order. They also showed that the solutions $f(x)$ arising from this generalization satisfy counterparts of many properties of the log-gamma function (or equivalently, the gamma function), including analogues of Bohr-Mollerup's theorem itself, Burnside's formula, Euler's infinite product, Euler's reflection formula, Gauss' limit, Gauss' multiplication formula, Gautschi's inequality, Legendre's duplication formula, Raabe's formula, Stirling's formula, Wallis's product formula, Weierstrass' infinite product, and Wendel's inequality for the gamma function. In this paper, we review the main results of this new and intriguing theory and provide an illustrative application.
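As a numerical illustration of one of the classical formulas listed above, Gauss' limit $\ln\Gamma(x)=\lim_{n\to\infty}\bigl[\ln n! + x\ln n - \sum_{k=0}^{n}\ln(x+k)\bigr]$ can be checked directly. The following sketch (plain Python standard library, not code from the book) compares the truncated limit with `math.lgamma`:

```python
import math

def gauss_log_gamma(x, n=10**6):
    """Approximate ln Gamma(x) by truncating Gauss' limit
       ln Gamma(x) = lim_{n->inf} [ln n! + x ln n - sum_{k=0}^{n} ln(x+k)].
    math.lgamma is used only to evaluate ln n! for the factorial term."""
    s = sum(math.log(x + k) for k in range(n + 1))
    return math.lgamma(n + 1) + x * math.log(n) - s
```

For moderate $x$ the truncation error decays like $O(1/n)$, so already `n=10**6` reproduces `math.lgamma(x)` to several decimal places.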
We evaluate the performance of novel numerical methods for solving one-dimensional nonlinear fractional dispersive and dissipative evolution equations. The methods are based on affine combinations of time-splitting integrators and pseudo-spectral discretizations using Hermite and Fourier expansions. We demonstrate the effectiveness of the proposed methods by numerically computing the dynamics of soliton solutions of the standard and fractional variants of the nonlinear Schr{\"o}dinger equation (NLSE) and the complex Ginzburg-Landau equation (CGLE), and by comparing the results with those obtained by standard splitting integrators. An exhaustive numerical investigation shows that the new technique is competitive with traditional composition-splitting schemes for Hamiltonian problems, both in terms of accuracy and computational cost. Moreover, it applies straightforwardly to irreversible models, outperforming high-order symplectic integrators, which can become unstable due to their need for negative time steps. Finally, we discuss potential improvements of the numerical methods aimed at increasing their efficiency, as well as possible applications to the investigation of dissipative solitons that arise in nonlinear optical systems of contemporary interest. Overall, the method offers a promising alternative for solving a wide range of evolutionary partial differential equations.
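For reference, the kind of standard splitting integrator the new methods are compared against can be sketched as a Strang split-step Fourier scheme for the focusing cubic NLSE $i u_t + \tfrac12 u_{xx} + |u|^2 u = 0$, whose bright soliton $u(x,t)=\operatorname{sech}(x)\,e^{it/2}$ serves as a test solution. This is an illustrative baseline only, not the affine-combination schemes of the paper:

```python
import numpy as np

# Strang split-step Fourier method for i u_t + (1/2) u_xx + |u|^2 u = 0.
# Each step: half nonlinear phase rotation, exact linear step in Fourier
# space, half nonlinear phase rotation (second-order accurate in dt).
L, N, dt, steps = 40.0, 256, 0.01, 100
x = -L / 2 + L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers

u = (1 / np.cosh(x)).astype(complex)         # bright soliton at t = 0
for _ in range(steps):
    u *= np.exp(0.5j * dt * np.abs(u) ** 2)              # half nonlinear step
    u = np.fft.ifft(np.exp(-0.5j * dt * k ** 2) * np.fft.fft(u))  # linear step
    u *= np.exp(0.5j * dt * np.abs(u) ** 2)              # half nonlinear step

t = steps * dt
exact = np.exp(0.5j * t) / np.cosh(x)        # soliton sech(x) e^{it/2}
err = np.max(np.abs(u - exact))
```

Both substeps are unitary, so the discrete mass $\sum_j |u_j|^2$ is conserved to roundoff, while the phase/shape error scales as $O(\Delta t^2)$.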
We present here a new splitting method to solve Lyapunov equations in Kronecker product form. Although the resulting matrix is of order $n^2$, each iteration requires only two operations with the matrix $A$: a multiplication of the form $(A-\sigma I) \tilde{B}$ and an inversion of the form $(A-\sigma I)^{-1}\tilde{B}$. We show that, for a suitable choice of the parameter $\sigma$, the iteration matrix has all of its eigenvalues of absolute value less than 1. Moreover, we present a theorem that enables us to obtain a good starting vector for the method.
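The structure of such an iteration can be illustrated with a classical single-shift Smith-type splitting for the Lyapunov equation $AX + XA^T = -BB^T$ with stable $A$: each step uses one solve with $A-\sigma I$, and the iteration matrix has eigenvalues $(\lambda+\sigma)/(\lambda-\sigma)$, all of modulus less than 1 when $\operatorname{Re}\lambda<0$ and $\sigma>0$. The sketch below is this standard variant, not necessarily the authors' exact scheme:

```python
import numpy as np

def smith_lyapunov(A, B, sigma, tol=1e-12, max_iter=500):
    """Solve A X + X A^T = -B B^T for stable A by a single-shift
    Smith-type splitting iteration (illustrative sketch)."""
    n = A.shape[0]
    M = A - sigma * np.eye(n)
    # Iteration matrix (A - sigma I)^{-1}(A + sigma I): eigenvalue lambda
    # of A maps to (lambda + sigma)/(lambda - sigma), which lies inside
    # the unit circle when Re(lambda) < 0 and sigma > 0.
    Ad = np.linalg.solve(M, A + sigma * np.eye(n))
    W = np.linalg.solve(M, B)          # the (A - sigma I)^{-1} B-type solve
    C = 2 * sigma * (W @ W.T)
    X = np.zeros((n, n))
    for _ in range(max_iter):
        X_new = Ad @ X @ Ad.T + C      # fixed-point step
        if np.linalg.norm(X_new - X) <= tol * max(np.linalg.norm(X_new), 1.0):
            break
        X = X_new
    return X_new

A = np.array([[-2.0, 1.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
X = smith_lyapunov(A, B, sigma=2.0)
residual = np.linalg.norm(A @ X + X @ A.T + B @ B.T)
```

One can verify the fixed point directly: $M X M^T = (A+\sigma I)X(A+\sigma I)^T + 2\sigma BB^T$ expands to $2\sigma(AX+XA^T) = -2\sigma BB^T$, i.e., the original equation.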
Recently established directed dependence measures for pairs $(X,Y)$ of random variables build upon the natural idea of comparing the conditional distributions of $Y$ given $X=x$ with the marginal distribution of $Y$. They assign pairs $(X,Y)$ values in $[0,1]$; the value is $0$ if and only if $X$ and $Y$ are independent, and it is $1$ exclusively when $Y$ is a function of $X$. Here we show that comparing randomly drawn conditional distributions with each other instead, or, equivalently, analyzing how sensitively the conditional distribution of $Y$ given $X=x$ depends on $x$, opens the door to constructing novel families of dependence measures $\Lambda_\varphi$ induced by general convex functions $\varphi: \mathbb{R} \rightarrow \mathbb{R}$, containing, e.g., Chatterjee's coefficient of correlation as a special case. After establishing additional useful properties of $\Lambda_\varphi$, we focus on continuous $(X,Y)$, translate $\Lambda_\varphi$ to the copula setting, consider the $L^p$-version, and establish an estimator that is strongly consistent in full generality. A real data example and a simulation study illustrate the chosen approach and the performance of the estimator. Complementing the aforementioned results, we show how a slight modification of the construction underlying $\Lambda_\varphi$ can be used to define new measures of explainability generalizing the fraction of explained variance.
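As a concrete instance of the special case mentioned above, Chatterjee's coefficient admits a very short rank-based estimator. The sketch below is the textbook no-ties version, independent of the paper's general $\Lambda_\varphi$ estimator:

```python
import numpy as np

def chatterjee_xi(x, y):
    """Chatterjee's rank coefficient (version assuming no ties in y):
    a directed dependence measure that tends to 0 under independence
    and to 1 when y is a (noiseless) function of x."""
    n = len(x)
    order = np.argsort(x)                        # sort the pairs by x
    r = np.argsort(np.argsort(y[order])) + 1     # ranks of y in that order
    return 1.0 - 3.0 * np.sum(np.abs(np.diff(r))) / (n ** 2 - 1)
```

For $Y=f(X)$ with monotone $f$ the estimate approaches 1 as the sample grows, while for independent samples it concentrates around 0; note the asymmetry in $x$ and $y$, reflecting the directed nature of the measure.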
We consider a one-dimensional singularly perturbed fourth-order problem with the additional feature of a shift term. An expansion into a smooth term, boundary layers, and an inner layer yields a formal solution decomposition, which, together with a stability result, provides the estimates needed for the subsequent numerical analysis. On classical layer-adapted meshes we present a numerical method that achieves supercloseness and optimal convergence orders in the associated energy norm. We also consider coarser meshes in view of the weak layers. Some numerical examples conclude the paper and support the theory.
This paper presents a novel spatial discretisation method for the reliable and efficient simulation of Bose-Einstein condensates modelled by the Gross-Pitaevskii equation and the corresponding nonlinear eigenvector problem. The method combines the high-accuracy properties of numerical homogenisation methods with a novel super-localisation approach for the calculation of the basis functions. A rigorous numerical analysis demonstrates superconvergence of the approach compared to classical polynomial and multiscale finite element methods, even in low regularity regimes. Numerical tests reveal the method's competitiveness with spectral methods, particularly in capturing critical physical effects in extreme conditions, such as vortex lattice formation in fast-rotating potential traps. The method's potential is further highlighted through a dynamic simulation of a phase transition from Mott insulator to Bose-Einstein condensate, emphasising its capability for reliable exploration of physical phenomena.
Many mechanisms behind the evolution of cooperation, such as reciprocity, indirect reciprocity, and altruistic punishment, require group knowledge of individual actions. But what keeps people cooperating when no one is looking? Conformist norm internalization, the tendency to abide by the behavior of the majority of the group, even when it is individually harmful, could be the answer. In this paper, we analyze a world where (1) there is group selection and punishment by indirect reciprocity but (2) many actions (half) go unobserved, and therefore unpunished. Can norm internalization fill this "observation gap" and lead to high levels of cooperation, even when agents may in principle cooperate only when likely to be caught and punished? Specifically, we seek to understand whether adding norm internalization to the strategy space in a public goods game can lead to higher levels of cooperation when both norm internalization and cooperation start out rare. We find the answer to be positive, but, interestingly, not because norm internalizers end up making up a substantial fraction of the population, nor because they cooperate much more than other agent types. Instead, norm internalizers, by polarizing, catalyzing, and stabilizing cooperation, can increase the levels of cooperation of other agent types, while only making up a minority of the population themselves.
We present a novel stabilized isogeometric formulation for the Stokes problem, where the geometry of interest is obtained via overlapping NURBS (non-uniform rational B-spline) patches, i.e., one patch on top of another in an arbitrary but predefined hierarchical order. All the visible regions constitute the computational domain, whereas independent patches are coupled through visible interfaces using Nitsche's formulation. Such a geometric representation inevitably involves trimming, which may yield trimmed elements of extremely small measure (referred to as bad elements) and thus lead to numerical instability. Motivated by the minimal stabilization method that rigorously guarantees stability for trimmed geometries [1], in this work we generalize it to the Stokes problem on overlapping patches. Central to our method is the distinct treatment of the pressure and velocity spaces: stabilization for velocity is carried out for the flux terms on interfaces, whereas pressure is stabilized in all the bad elements. We provide a priori error estimates with a comprehensive theoretical study. Through a suite of numerical tests, we first show that optimal convergence rates are achieved, in agreement with our theoretical findings. Second, we show that the proposed stabilization improves the accuracy of the pressure by several orders of magnitude compared to the results without stabilization. Finally, we demonstrate the flexibility and efficiency of the proposed method in capturing local features of the solution field.
In this paper, we consider an energy-conserving continuous Galerkin discretization of the Gross-Pitaevskii equation with a magnetic trapping potential and a stirring potential for angular momentum rotation. The discretization is based on finite elements in space and time and allows for arbitrary polynomial orders. It was first analyzed in [O. Karakashian, C. Makridakis; SIAM J. Numer. Anal. 36(6):1779-1807, 1999] in the absence of potential terms and corresponding a priori error estimates were derived in 2D. In this work we revisit the approach in the generalized setting of the Gross-Pitaevskii equation with rotation and we prove uniform $L^\infty$-bounds for the corresponding numerical approximations in 2D and 3D without coupling conditions between the spatial mesh size and the time step size. With this result at hand, we are in particular able to extend the previous error estimates to the 3D setting while avoiding artificial CFL conditions.
An unconventional approach is applied to solve the one-dimensional Burgers' equation. It is based on spline polynomial interpolation and the Hopf-Cole transformation. A Taylor expansion is used to approximate the exponential term in the transformation; the analytical solution of the simplified equation is then discretized to form a numerical scheme involving various special functions. The derived scheme is explicit and adaptable to parallel computing, although some types of boundary condition cannot be specified straightforwardly. Three test cases were employed to examine its accuracy, stability, and parallel scalability. In terms of accuracy, the schemes employing cubic and quintic spline interpolation perform equally well, reducing the $\ell_{1}$, $\ell_{2}$ and $\ell_{\infty}$ error norms down to the order of $10^{-4}$. Owing to the transformation, their stability condition $\nu \Delta t/\Delta x^2 > 0.02$ involves the viscosity/diffusion coefficient $\nu$. Under this condition, the schemes can run at a large time step size $\Delta t$ even when the grid spacing $\Delta x$ is small. These characteristics suggest that the method is more suitable for operational use than for research purposes.
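The Hopf-Cole transformation underlying the scheme can be illustrated directly: writing $u=-2\nu\,\varphi_x/\varphi$ turns Burgers' equation $u_t+uu_x=\nu u_{xx}$ into the heat equation $\varphi_t=\nu\varphi_{xx}$, which can be solved exactly. The sketch below (a periodic spectral illustration, not the spline-based scheme of the paper) checks this pipeline against the closed-form solution generated by $\varphi=2+e^{-\nu t}\cos x$:

```python
import numpy as np

nu, N, t = 0.1, 64, 1.0
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)             # integer wavenumbers

u0 = 2 * nu * np.sin(x) / (2 + np.cos(x))    # exact Burgers solution at t = 0

# Hopf-Cole: phi = exp(-(1/(2 nu)) * antiderivative of u), computed spectrally
u0_hat = np.fft.fft(u0)
anti = np.zeros_like(u0_hat)
nz = k != 0
anti[nz] = u0_hat[nz] / (1j * k[nz])         # mean-zero antiderivative
phi = np.exp(-np.fft.ifft(anti).real / (2 * nu))

# Heat equation phi_t = nu phi_xx, solved exactly in Fourier space
phi_hat = np.fft.fft(phi) * np.exp(-nu * k ** 2 * t)
phi_t = np.fft.ifft(phi_hat).real
phix_t = np.fft.ifft(1j * k * phi_hat).real
u = -2 * nu * phix_t / phi_t                 # transform back to Burgers

u_exact = 2 * nu * np.exp(-nu * t) * np.sin(x) / (2 + np.exp(-nu * t) * np.cos(x))
err = np.max(np.abs(u - u_exact))
```

Since every step here is spectrally exact for this smooth solution, the recovered $u$ matches the closed form essentially to machine precision; the multiplicative constant left undetermined by the antiderivative cancels in $\varphi_x/\varphi$.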
A sequential pattern with negation, or negative sequential pattern, takes the form of a sequential pattern in which the negation symbol may be used in front of some of the pattern's itemsets. Intuitively, such a pattern occurs in a sequence if the negated itemsets are absent from the sequence. Recent work has shown that different semantics can be attributed to these pattern forms and that state-of-the-art algorithms do not extract the same sets of patterns. This raises the important question of the interpretability of sequential patterns with negation. In this study, our focus is on exploring how potential users perceive negation in sequential patterns. Our aim is to determine whether specific semantics are more "intuitive" than others and whether these align with the semantics employed by one or more state-of-the-art algorithms. To achieve this, we designed a questionnaire to reveal which semantics each user finds intuitive. This article presents both the design of the questionnaire and an in-depth analysis of the 124 responses obtained. The outcomes indicate that two of the semantics are predominantly intuitive; however, neither of them aligns with the semantics of the primary state-of-the-art algorithms. As a result, we provide recommendations to account for this disparity in the conclusions drawn.