Nishikawa (2007) proposed to reformulate the classical Poisson equation as a steady state problem for a linear hyperbolic system. This results in optimal error estimates for both the solution of the elliptic equation and its gradient. However, it prevents the application of well-known solvers for elliptic problems. We show connections to a discontinuous Galerkin (DG) method analyzed by Cockburn, Guzm\'an, and Wang (2009), which is very difficult to implement in general. Next, we demonstrate how this method can be implemented efficiently using summation-by-parts (SBP) operators, in particular in the context of SBP DG methods such as the DG spectral element method (DGSEM). The resulting scheme combines desirable properties of both the hyperbolic and the elliptic points of view, in particular a high order of convergence of the gradients, one order higher than usually expected from DG methods for elliptic problems.
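For orientation, the reformulation can be sketched in one space dimension (up to scaling conventions): the Poisson problem $-\partial_{xx} u = f$ is replaced by the relaxation system

\[
\partial_t u = \partial_x q + f, \qquad \partial_t q = \frac{1}{T_r}\left(\partial_x u - q\right),
\]

with a relaxation time $T_r > 0$. At steady state, the second equation enforces $q = \partial_x u$, and the first then reduces to $-\partial_{xx} u = f$; the gradient is evolved as an independent variable, which is the source of the improved convergence order for $q$.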
We consider the coupled system of the Landau--Lifshitz--Gilbert equation and the law of conservation of linear momentum to describe magnetic processes in ferromagnetic materials, including magnetoelastic effects in the small-strain regime. For this nonlinear system of time-dependent partial differential equations, we present a decoupled integrator based on first-order finite elements in space and an implicit one-step method in time. We prove unconditional convergence of the sequence of discrete approximations towards a weak solution of the system as the mesh size and the time-step size go to zero. In contrast to previous numerical works on this problem, we prove for our method a discrete energy law that mimics that of the continuous problem and, passing to the limit, yields an energy inequality satisfied by weak solutions. Moreover, our method does not employ a nodal projection to impose the unit-length constraint on the discrete magnetisation, so that the stability of the method does not require weakly acute meshes. Furthermore, our integrator and its analysis hold for a more general setting, including body forces and traction, as well as a more general representation of the magnetostrain. Numerical experiments underpin the theory and showcase the applicability of the scheme for the simulation of dynamical processes involving magnetoelastic materials at submicrometer length scales.
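For reference, a standard form of the Landau--Lifshitz--Gilbert equation (up to normalisation conventions, with Gilbert damping parameter $\alpha > 0$ and an effective field $\mathbf{h}_{\mathrm{eff}}$ that, in the magnetoelastic setting, also carries the elastic contribution) reads

\[
\partial_t \mathbf{m} = - \mathbf{m} \times \mathbf{h}_{\mathrm{eff}} + \alpha\, \mathbf{m} \times \partial_t \mathbf{m}, \qquad |\mathbf{m}| = 1.
\]

Since both terms on the right-hand side are orthogonal to $\mathbf{m}$, the modulus $|\mathbf{m}|$ is formally conserved in time; this is precisely the constraint that the discrete scheme must respect, here without resorting to a nodal projection.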
This paper develops and benchmarks an immersed peridynamics method to simulate the deformation, damage, and failure of hyperelastic materials within a fluid-structure interaction framework. The immersed peridynamics method describes an incompressible structure immersed in a viscous incompressible fluid. It expresses the momentum equation and incompressibility constraint in Eulerian form, and it describes the structural motion and resultant forces in Lagrangian form. Coupling between Eulerian and Lagrangian variables is achieved by integral transforms with Dirac delta function kernels, as in standard immersed boundary methods. The major difference between our approach and conventional immersed boundary methods is that we use peridynamics, instead of classical continuum mechanics, to determine the structural forces. We focus on non-ordinary state-based peridynamic material descriptions that allow us to use a constitutive correspondence framework that can leverage well characterized nonlinear constitutive models of soft materials. The convergence and accuracy of our approach are compared to both conventional and immersed finite element methods using widely used benchmark problems of nonlinear incompressible elasticity. We demonstrate that the immersed peridynamics method yields comparable accuracy with similar numbers of structural degrees of freedom for several choices of the size of the peridynamic horizon. We also demonstrate that the method can generate grid-converged simulations of fluid-driven material damage growth, crack formation and propagation, and rupture under large deformations.
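The Eulerian--Lagrangian coupling referred to above takes the standard immersed boundary form (the notation here is generic: $\boldsymbol{\chi}(\mathbf{X},t)$ denotes the Lagrangian configuration map, $\mathbf{F}$ the Lagrangian force density, which in this work is supplied by peridynamics rather than by classical continuum mechanics, and $\mathbf{u}$ the Eulerian velocity field):

\[
\mathbf{f}(\mathbf{x},t) = \int_U \mathbf{F}(\mathbf{X},t)\, \delta\big(\mathbf{x}-\boldsymbol{\chi}(\mathbf{X},t)\big)\, \mathrm{d}\mathbf{X},
\qquad
\partial_t \boldsymbol{\chi}(\mathbf{X},t) = \int_\Omega \mathbf{u}(\mathbf{x},t)\, \delta\big(\mathbf{x}-\boldsymbol{\chi}(\mathbf{X},t)\big)\, \mathrm{d}\mathbf{x},
\]

i.e., Lagrangian forces are spread onto the Eulerian grid and the structure is advected with the local fluid velocity via the same delta-function kernels.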
At STOC 2002, Eiter, Gottlob, and Makino presented a technique called ordered generation that yields an $n^{O(d)}$-delay algorithm listing all minimal transversals of an $n$-vertex hypergraph of degeneracy $d$. Recently, at IWOCA 2019, Conte, Kant\'e, Marino, and Uno asked whether this XP-delay algorithm parameterized by $d$ could be made FPT-delay parameterized by $d$ and the maximum degree $\Delta$, i.e., an algorithm with delay $f(d,\Delta)\cdot n^{O(1)}$ for some computable function $f$. Moreover, as a first step toward answering that question, they noted that achieving such a delay is open even for the intimately related problem of listing all minimal dominating sets in graphs. In this paper, we answer the latter question in the affirmative.
A standard approach to solving ordinary differential equations, when they describe dynamical systems, is to adopt a Runge-Kutta or related scheme. Such schemes, however, are not applicable to the large class of equations that do not constitute dynamical systems. In several physical systems, we encounter integro-differential equations with memory terms, where the time derivative of a state variable at a given time depends on all past states of the system. Moreover, there are equations whose solutions do not admit a well-defined Taylor series expansion. The Maxey-Riley-Gatignol equation, which describes the dynamics of an inertial particle in nonuniform and unsteady flow, presents both challenges. We use it as a test bed to address the questions we raise, but our method may be applied to all equations of this class. We show that the Maxey-Riley-Gatignol equation can be embedded into an extended Markovian system, constructed by introducing a new dynamical co-evolving state variable that encodes the memory of past states. We develop a Runge-Kutta algorithm for the resultant Markovian system. The form of the kernels involved in deriving the Runge-Kutta scheme necessitates the use of an expansion in powers of $t^{1/2}$. Our approach naturally inherits the benefits of standard time-integrators, namely a constant memory storage cost, a linear growth of operational effort with simulation time, and the ability to restart a simulation with the final state as the new initial condition.
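A much simplified illustration of the embedding idea (not the Maxey-Riley-Gatignol kernel itself, whose Basset history term decays like $(t-s)^{-1/2}$ rather than exponentially): for a memory equation of the form

\[
\dot{v}(t) = F\big(v(t)\big) + \int_0^t e^{-\lambda (t-s)}\, g\big(v(s)\big)\, \mathrm{d}s,
\]

introducing the co-evolving variable $w(t) = \int_0^t e^{-\lambda (t-s)} g(v(s))\,\mathrm{d}s$ yields the local, Markovian system $\dot{v} = F(v) + w$, $\dot{w} = g(v) - \lambda w$, which can be advanced by any standard Runge-Kutta method with constant memory cost. The construction in the present work plays an analogous role for the algebraically decaying Basset kernel.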
Hawkes processes are often applied to model dependence and interaction phenomena in multivariate event data sets, such as neuronal spike trains, social interactions, and financial transactions. In the nonparametric setting, learning the temporal dependence structure of Hawkes processes is generally a computationally expensive task, all the more so with Bayesian estimation methods. In particular, for generalised nonlinear Hawkes processes, Markov chain Monte Carlo methods used to approximate the doubly intractable posterior distribution are not scalable to high-dimensional processes in practice. Recently, efficient algorithms targeting a mean-field variational approximation of the posterior distribution have been proposed. In this work, we first unify existing variational Bayes approaches under a general nonparametric inference framework, and analyse the asymptotic properties of these methods under easily verifiable conditions on the prior, the variational class, and the nonlinear model. Secondly, we propose a novel sparsity-inducing procedure, and derive an adaptive mean-field variational algorithm for the popular sigmoid Hawkes processes. Our algorithm is parallelisable and therefore computationally efficient in high-dimensional settings. Through an extensive set of numerical simulations, we also demonstrate that our procedure is able to adapt to the dimensionality of the parameter of the Hawkes process, and is partially robust to certain types of model misspecification.
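For reference, a common parametrisation of a $K$-dimensional nonlinear Hawkes process specifies the conditional intensity of the $k$-th component as

\[
\lambda_k(t) = \phi_k\Big( \nu_k + \sum_{l=1}^{K} \int_0^{t^-} h_{kl}(t-s)\, \mathrm{d}N_l(s) \Big), \qquad k = 1, \dots, K,
\]

where $\nu_k$ is a background rate, the interaction functions $h_{kl}$ encode the temporal dependence structure to be learned nonparametrically, and $\phi_k$ is a nonlinear link function, e.g. a (scaled) sigmoid in the sigmoid Hawkes model; the exact conventions used in the paper may differ.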
This study focuses on the presence of (multi)fractal structures in confined hadronic matter through the momentum distributions of mesons produced in proton-proton collisions at energies between 23 GeV and 63 GeV. The analysis demonstrates that the $q$-exponential behaviour of the particle momentum distributions is consistent with fractal characteristics, exhibiting fractal structures in confined hadronic matter with features similar to those observed in the deconfined quark-gluon plasma (QGP) regime. Furthermore, a systematic analysis of meson production in hadronic collisions at energies below 1 TeV suggests that specific fractal parameters are universal, independent of confinement or deconfinement, while others may be influenced by the quark content of the produced meson. These results pave the way for further research exploring the implications of fractal structures for various physical distributions and offer insights into the nature of the phase transition between the confined and deconfined regimes.
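For reference, the $q$-exponential behaviour referred to above is of the Tsallis type: up to normalisation, and with the argument typically taken to be a momentum or energy variable scaled by an effective temperature $T$, the distribution follows

\[
f(x) \propto \Big[1 + (q-1)\,\frac{x}{T}\Big]^{-\frac{1}{q-1}},
\]

which reduces to the Boltzmann exponential $e^{-x/T}$ in the limit $q \to 1$; the deviation of the entropic index $q$ from unity quantifies the departure from purely exponential behaviour.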
This paper presents the error analysis of numerical methods on graded meshes for stochastic Volterra equations with weakly singular kernels. We first prove a novel regularity estimate for the exact solution by analyzing the associated convolution structure. This reveals that the exact solution exhibits an initial singularity, in the sense that its H\"older continuity exponent on any neighborhood of $t=0$ is lower than that on every compact subset of $(0,T]$. Motivated by this initial singularity, we then construct the Euler--Maruyama method, a fast Euler--Maruyama method, and the Milstein method based on graded meshes. By establishing their pointwise-in-time error estimates, we give the grading exponents of the meshes that attain the optimal uniform-in-time convergence orders, which improve on those of the uniform-mesh case. Numerical experiments are finally reported to confirm the sharpness of the theoretical findings.
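As a reminder of the construction (the specific grading exponent required depends on the kernel singularity and on the scheme), a graded mesh on $[0,T]$ with $N$ steps and grading exponent $r \ge 1$ takes the form

\[
t_j = T\left(\frac{j}{N}\right)^{r}, \qquad j = 0, 1, \dots, N,
\]

so that grid points cluster near $t = 0$, where the solution loses H\"older regularity; $r = 1$ recovers the uniform mesh.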
Estimating the success probability of a binomial distribution is a basic problem, which appears in almost all introductory statistics courses and is performed frequently in various studies. In some cases, the parameter of interest is a difference between two such probabilities, and the current work studies the construction of confidence intervals for this parameter when the sample size is small. Our goal is to find the shortest confidence intervals under the constraint that the coverage probability is larger than a predetermined level. For the two-sample case, there is no known algorithm that achieves this goal, but different heuristic procedures have been suggested, and the present work aims at finding optimal confidence intervals. In the one-sample case, an algorithm that finds optimal confidence intervals was presented by Blyth and Still (1983). It is based on solving small and local optimization problems and then using an inversion step to find the globally optimal solution. We show that this approach fails in the two-sample case, and therefore, in order to find optimal confidence intervals, one needs to solve a global optimization problem rather than small and local ones, which is computationally much harder. We present and discuss the suitable global optimization problem. Using the Gurobi package, we find near-optimal solutions when the sample sizes are smaller than 15, and we compare these solutions to some existing methods, both approximate and exact. We find that the improvement in terms of length with respect to the best competitor varies between 1.5\% and 5\% for different parameters of the problem. Therefore, we recommend the use of the new confidence intervals when both sample sizes are smaller than 15. Tables of the confidence intervals are provided in the accompanying Excel file.
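To fix ideas, the exact coverage requirement underlying the global optimization problem can be written as follows (this is the generic constraint; the precise objective and discretisation used in the paper may differ). For sample sizes $n_1, n_2$ and confidence level $1-\alpha$, the collection of intervals $\{CI(x_1,x_2)\}$ assigned to all possible outcomes must satisfy, for every $(p_1, p_2) \in [0,1]^2$,

\[
\sum_{x_1=0}^{n_1}\sum_{x_2=0}^{n_2} \mathbf{1}\big\{p_1 - p_2 \in CI(x_1,x_2)\big\}\, \binom{n_1}{x_1} p_1^{x_1}(1-p_1)^{n_1-x_1}\, \binom{n_2}{x_2} p_2^{x_2}(1-p_2)^{n_2-x_2} \;\ge\; 1-\alpha,
\]

and the optimization seeks the collection of intervals of minimal (e.g., total) length subject to this family of constraints, which couples all outcomes simultaneously.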
Learning classification tasks with $(2^n \times 2^n)$ inputs typically involves $\le n$ $(2 \times 2)$ max-pooling (MP) operators along the entire feedforward deep architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracies. In particular, the average accuracies of the advanced-VGG with $m$ layers (A-VGGm) architectures are 0.936, 0.940, 0.954, 0.955, and 0.955 for $m$=6, 8, 14, 13, and 16, respectively. The results indicate that the accuracy of A-VGG8 is superior to that of VGG16, and that the accuracies of A-VGG13 and A-VGG16 are equal and comparable to that of Wide-ResNet16. In addition, replacing the three fully connected (FC) layers with one FC layer (for A-VGG6 and A-VGG14), or with several linear-activation FC layers, yielded similar accuracies. These significantly enhanced accuracies stem from training the most influential input-output routes, in comparison to the inferior routes selected when multiple MP decisions are made along the deep architecture. In addition, accuracies are sensitive to the order of the non-commutative MP and average-pooling operators adjacent to the output layer, which varies the number and location of the training routes. The results call for the reexamination of previously proposed deep architectures and their accuracies, utilizing the proposed pooling strategy adjacent to the output layer.
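A toy one-dimensional example (not taken from the paper) illustrates the non-commutativity of the two pooling operators: applied to the sequence $(0, 2, 0, 4)$ with non-overlapping windows of size two,

\[
\operatorname{avg}\big(\operatorname{MP}(0,2),\, \operatorname{MP}(0,4)\big) = \operatorname{avg}(2,4) = 3,
\qquad
\operatorname{MP}\big(\operatorname{avg}(0,2),\, \operatorname{avg}(0,4)\big) = \operatorname{MP}(1,2) = 2,
\]

so exchanging the order of max pooling and average pooling adjacent to the output layer changes the computed features, not merely their arrangement.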
We present a multigrid algorithm to efficiently solve the large saddle-point systems of equations that typically arise in PDE-constrained optimization under uncertainty. The algorithm is based on a collective smoother that, at each iteration, sweeps over the nodes of the computational mesh and solves a reduced saddle-point system whose size depends on the number $N$ of samples used to discretize the probability space. We show that this reduced system can be solved with optimal $O(N)$ complexity. We test the multigrid method on three problems: a linear-quadratic problem, for which the multigrid method is used to solve directly the linear optimality system; a nonsmooth problem with box constraints and $L^1$-norm penalization on the control, in which the multigrid scheme is used within a semismooth Newton iteration; and a risk-averse problem with the smoothed CVaR risk measure, where the multigrid method is called within a preconditioned Newton iteration. In all cases, the multigrid algorithm exhibits very good performance and robustness with respect to all parameters of interest.
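For reference, the CVaR risk measure at level $\beta \in (0,1)$ admits the Rockafellar--Uryasev formulation

\[
\mathrm{CVaR}_{\beta}[X] = \min_{t \in \mathbb{R}} \Big\{ t + \frac{1}{1-\beta}\, \mathbb{E}\big[(X - t)_{+}\big] \Big\},
\]

and the smoothed CVaR referred to above replaces the nonsmooth positive part $(\cdot)_{+}$ by a differentiable approximation, which is what makes the (preconditioned) Newton iteration applicable.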