We introduce and analyze a Statically Condensed Iterated Penalty (SCIP) method for solving incompressible flow problems discretized with $p$th-order Scott-Vogelius elements. While the standard iterated penalty method is often the preferred algorithm for computing the discrete solution, it requires inverting a linear system with $\mathcal{O}(p^{d})$ unknowns at each iteration. The SCIP method reduces the size of this system to $\mathcal{O}(p^{d-1})$ unknowns while maintaining the geometric rate of convergence of the iterated penalty method. The application of SCIP to Kovasznay flow and Moffatt eddies shows good agreement with the theory.
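For orientation, the classical iterated penalty iteration for the Stokes problem, of which SCIP is a condensed variant, takes the following textbook form (the authors' exact formulation may differ in details, and sign conventions vary): given $w^0 = 0$ and a penalty parameter $\rho > 0$, find $u^{n+1}$ in the velocity space $V_h$ such that
\[
a(u^{n+1}, v) + \rho\,(\nabla\cdot u^{n+1},\, \nabla\cdot v) = (f, v) - (\nabla\cdot v,\, w^{n}) \quad \forall v \in V_h,
\qquad
w^{n+1} = w^{n} + \rho\,\nabla\cdot u^{n+1},
\]
with the discrete pressure recovered from the accumulator $w^n$ at convergence. Each iteration inverts the penalized velocity operator on the left, which for degree-$p$ elements in $d$ dimensions carries the $\mathcal{O}(p^{d})$ unknowns per cell that static condensation reduces to $\mathcal{O}(p^{d-1})$ facet unknowns.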
Networked discrete dynamical systems are often used to model the spread of contagions and decision-making by agents in coordination games. Fixed points of such dynamical systems represent configurations to which the system converges. In the dissemination of undesirable contagions (such as rumors and misinformation), convergence to fixed points with a small number of affected nodes is a desirable goal. Motivated by such considerations, we formulate a novel optimization problem of finding a nontrivial fixed point of the system with the minimum number of affected nodes. We establish that, unless P = NP, there is no polynomial time algorithm for approximating a solution to this problem to within a factor of $n^{1-\epsilon}$ for any constant $\epsilon > 0$. To cope with this computational intractability, we identify several special cases for which the problem can be solved efficiently. Further, we introduce an integer linear program to address the problem for networks of reasonable sizes. For solving the problem on larger networks, we propose a general heuristic framework along with greedy selection methods. Extensive experimental results on real-world networks demonstrate the effectiveness of the proposed heuristics.
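The abstract does not fix the local update rule, so as a minimal sketch assume synchronous threshold dynamics on an undirected graph, a common model in this literature; a configuration is a fixed point when one synchronous step leaves it unchanged, and a simple greedy heuristic (in the spirit of, though not necessarily identical to, the proposed greedy selection methods) tries to shrink the set of affected nodes while remaining at a nontrivial fixed point:

```python
import networkx as nx

def step(G, thresholds, state):
    """One synchronous update: node v becomes 1 iff the number of 1s in
    its closed neighborhood reaches its (assumed) threshold."""
    return {v: int(state[v] + sum(state[u] for u in G[v]) >= thresholds[v])
            for v in G}

def is_fixed_point(G, thresholds, state):
    return step(G, thresholds, state) == state

def greedy_small_fixed_point(G, thresholds):
    """Start from the all-ones configuration (a fixed point whenever every
    threshold is at most deg(v) + 1) and greedily switch nodes to 0 as
    long as the result stays a nontrivial fixed point."""
    state = {v: 1 for v in G}
    for v in sorted(G, key=G.degree, reverse=True):
        trial = dict(state)
        trial[v] = 0
        if sum(trial.values()) > 0 and is_fixed_point(G, thresholds, trial):
            state = trial
    return state

# example: a path graph where every node needs two active closed neighbors
G = nx.path_graph(6)
print(greedy_small_fixed_point(G, {v: 2 for v in G}))
```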
Likelihood-free inference for simulator-based statistical models has developed rapidly from its infancy into a useful tool for practitioners. However, models with more than a handful of parameters still generally remain a challenge for Approximate Bayesian Computation (ABC) based inference. To advance the possibilities for performing likelihood-free inference in higher-dimensional parameter spaces, we introduce an extension of the popular Bayesian optimisation based approach that approximates discrepancy functions in a probabilistic manner, which lends itself to an efficient exploration of the parameter space. Our approach achieves computational scalability for higher-dimensional parameter spaces by using separate acquisition functions and discrepancies for each parameter. The efficient additive acquisition structure is combined with an exponentiated loss-likelihood to provide a misspecification-robust characterisation of the marginal posterior distribution for all model parameters. The method successfully performs computationally efficient inference in a 100-dimensional space on canonical examples and compares favourably to existing modularised ABC methods. We further illustrate the potential of this approach by fitting a bacterial transmission dynamics model to a real data set, which provides biologically coherent results on strain competition in a 30-dimensional parameter space.
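As a schematic of the exponentiated-loss construction (the paper's precise weighting and discrepancy definitions may differ), a per-parameter discrepancy $\Delta_j$, modelled by its own Gaussian process surrogate and explored by its own acquisition function, induces an approximate marginal posterior
\[
\pi(\theta_j \mid y_{\mathrm{obs}}) \;\propto\; \pi(\theta_j)\,\exp\!\bigl(-\lambda\,\Delta_j(\theta_j)\bigr),
\]
where $\lambda > 0$ tunes how sharply mismatch is penalized. Because each coordinate carries its own surrogate and acquisition function, the cost of the design step grows additively rather than exponentially in the dimension.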
We present a numerical approximation of the solutions of the Euler equations with a gravitational source term. On the basis of a Suliciu-type relaxation model with two relaxation speeds, we construct an approximate Riemann solver, which is used in a first-order Godunov-type finite volume scheme. This scheme preserves both stationary solutions and the low Mach limit to the corresponding incompressible equations. In addition, we prove that our scheme preserves the positivity of density and internal energy, that it is entropy-satisfying, and that it does not give rise to numerical checkerboard modes in the incompressible limit. We then extend the scheme to second order while retaining the positivity, asymptotic-preserving, and well-balancing properties. Finally, the theoretical properties are investigated in numerical experiments.
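Structurally, a first-order Godunov-type finite volume step with a source term looks as follows; the Rusanov (local Lax-Friedrichs) flux below is only a generic stand-in for the Suliciu-type relaxation solver constructed in the paper, and the naive explicit source treatment shown here is, unlike the paper's scheme, not well-balanced:

```python
import numpy as np

def rusanov_flux(uL, uR, flux, max_speed):
    """Local Lax-Friedrichs flux -- a simple stand-in for the Suliciu-type
    approximate Riemann solver constructed in the paper."""
    a = max(max_speed(uL), max_speed(uR))
    return 0.5 * (flux(uL) + flux(uR)) - 0.5 * a * (uR - uL)

def godunov_step(u, dx, dt, flux, max_speed, source):
    """One step: U_i^{n+1} = U_i^n - dt/dx (F_{i+1/2} - F_{i-1/2}) + dt S_i.
    Boundary cells are held fixed for simplicity."""
    F = np.array([rusanov_flux(u[i], u[i + 1], flux, max_speed)
                  for i in range(len(u) - 1)])
    unew = u.copy()
    unew[1:-1] -= dt / dx * (F[1:] - F[:-1])
    unew[1:-1] += dt * source(u[1:-1])   # naive, not well-balanced
    return unew
```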
The $L_p$-discrepancy is a quantitative measure for the irregularity of distribution of an $N$-element point set in the $d$-dimensional unit cube, which is closely related to the worst-case error of quasi-Monte Carlo algorithms for numerical integration. Its inverse for dimension $d$ and error threshold $\varepsilon \in (0,1)$ is the number of points in $[0,1)^d$ required so that the minimal normalized $L_p$-discrepancy is less than or equal to $\varepsilon$. It is well known that the inverse of the $L_2$-discrepancy grows exponentially fast with the dimension $d$, i.e., we have the curse of dimensionality, whereas the inverse of the $L_{\infty}$-discrepancy depends exactly linearly on $d$. The behavior of the inverse of the $L_p$-discrepancy for general $p \not\in \{2,\infty\}$ has been an open problem for many years. In this paper we show that the $L_p$-discrepancy suffers from the curse of dimensionality for all $p$ of the form $p=2 \ell/(2 \ell -1)$ with $\ell \in \mathbb{N}$. This result follows from a more general result that we show for the worst-case error of numerical integration in an anchored Sobolev space with anchor 0 of once differentiable functions in each variable whose first derivative has finite $L_q$-norm, where $q$ is an even positive integer satisfying $1/p+1/q=1$.
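For the case $p=2$, the star $L_2$-discrepancy referred to above can be evaluated exactly in $\mathcal{O}(N^2 d)$ operations via Warnock's formula; a direct numpy implementation:

```python
import numpy as np

def l2_star_discrepancy(x):
    """Exact L2 star discrepancy via Warnock's formula.
    x: (N, d) array of points in [0, 1)^d."""
    n, d = x.shape
    term1 = 3.0 ** (-d)
    term2 = np.prod((1.0 - x ** 2) / 2.0, axis=1).sum() * 2.0 / n
    # pairwise product over coordinates j of (1 - max(x_nj, x_mj))
    pairwise = np.prod(1.0 - np.maximum(x[:, None, :], x[None, :, :]), axis=2)
    term3 = pairwise.sum() / n ** 2
    return np.sqrt(term1 - term2 + term3)

# example: 100 random points in dimension 5
print(l2_star_discrepancy(np.random.default_rng(0).random((100, 5))))
```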
The time-fractional porous medium equation is an important model of many hydrological, physical, and chemical flows. We study its self-similar solutions, which describe the profiles of many important experimentally measured situations. We prove that there is a unique solution to the general initial-boundary value problem in the one-dimensional setting. When supplemented with boundary conditions from the physical models, the problem exhibits a self-similar solution described with the use of the Erd\'elyi-Kober fractional operator. Using a backward shooting method, we show that there exists a unique solution to this self-similar problem. The shooting method is not only useful for deriving theoretical results: we also utilize it to devise an efficient numerical scheme for the governing problem, along with two ways of discretizing the Erd\'elyi-Kober fractional derivative. Since the latter is a nonlocal operator, its numerical realization has to include some truncation; we find the correct truncation regime and prove several error estimates. Furthermore, the backward shooting method can itself be used to solve the main problem, and we provide a convergence proof. The main difficulty lies in the degeneracy of the diffusivity, which we overcome with some regularization. Our results are supplemented with numerical simulations that verify the theoretical findings.
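The backward shooting idea, stripped of the Erd\'elyi-Kober specifics and shown here only generically under the assumption of a monotone dependence on the front position, integrates the profile equation backward from a guessed front (where the degenerate solution vanishes, regularized by a small positive start value) and bisects until the boundary condition at the origin is met:

```python
from scipy.integrate import solve_ivp

def shoot(front, rhs, eps=1e-8):
    """Integrate a generic profile ODE rhs(eta, u) backward from the
    guessed front position, starting at a small positive value eps --
    a regularization of the degenerate diffusivity -- and return the
    value at eta = 0."""
    sol = solve_ivp(rhs, (front - eps, 0.0), [eps], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

def backward_shooting(rhs, target, lo, hi, tol=1e-10):
    """Bisect on the front position until the boundary value at the
    origin matches `target`; assumes shoot(front, rhs) increases
    monotonically with the front position."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if shoot(mid, rhs) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```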
We present a novel solver technique for the anisotropic heat flux equation, aimed at the high level of anisotropy seen in magnetic confinement fusion plasmas. Such problems pose two major challenges: (i) discretization accuracy and (ii) efficient implicit linear solvers. We address both challenges simultaneously by constructing a new finite element discretization with excellent accuracy properties, tailored to a novel solver approach based on algebraic multigrid (AMG) methods designed for advective operators. We pose the problem in a mixed formulation, introducing the heat flux as an auxiliary variable and discretizing both the temperature and auxiliary fields in a discontinuous Galerkin space. The resulting block matrix system is then reordered and solved using an approach in which two advection operators are inverted using AMG solvers based on approximate ideal restriction (AIR), which is particularly efficient for upwind discontinuous Galerkin discretizations of advection. To ensure that the advection operators are non-singular, in this paper we restrict ourselves to open (acyclic) magnetic field lines. We demonstrate the proposed discretization's superior accuracy over other discretizations of anisotropic heat flux, achieving errors $1000\times$ smaller for an anisotropy ratio of $10^9$, while also demonstrating fast convergence of the proposed iterative solver in highly anisotropic regimes where other diffusion-based AMG methods fail.
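Schematically, under the simplifying assumptions of a unit field direction $b$ with $\nabla\cdot b = 0$ and purely parallel conduction (the paper's setting is more general), the mixed formulation turns the second-order anisotropic operator into a pair of advection operators along the field:
\[
-\nabla\cdot\bigl(\kappa_\parallel\, b\,(b\cdot\nabla T)\bigr) = f
\quad\Longleftrightarrow\quad
q + \kappa_\parallel\, b\cdot\nabla T = 0, \qquad b\cdot\nabla q = f,
\]
and it is these two field-aligned advection operators that, after upwind DG discretization and block reordering, are inverted with the AIR-based AMG solvers.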
With advances in neural recording techniques, neuroscientists are now able to record the spiking activity of many hundreds of neurons simultaneously, and new statistical methods are needed to understand the structure of this large-scale neural population activity. Although previous work has tried to summarize neural activity within and between known populations by extracting low-dimensional latent factors, in many cases what determines a unique population may be unclear: neurons differ not only in their anatomical location but also in their cell types and response properties. To identify populations directly related to neural activity, we develop a clustering method based on a mixture of dynamic Poisson factor analyzers (mixDPFA) model, with the number of clusters and the dimension of the latent factors for each cluster treated as unknown parameters. To fit the proposed mixDPFA model, we derive a Markov chain Monte Carlo (MCMC) algorithm that efficiently samples from its posterior distribution. Validating the MCMC algorithm through simulations, we find that it can accurately recover the unknown parameters and the true clustering in the model, and that it is insensitive to the initial cluster assignments. We then apply the proposed mixDPFA model to multi-region experimental recordings, where we find that it can identify novel, reliable clusters of neurons based on their activity, and may thus be a useful tool for neural data analysis.
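As a hedged sketch of the generative structure behind such a model (the link function, priors, and latent dynamics here are illustrative stand-ins, not the paper's exact specification), each neuron belongs to a cluster whose own low-dimensional latent trajectory drives its Poisson spike counts:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins, n_clusters, latent_dim = 60, 200, 3, 2

z = rng.integers(n_clusters, size=n_neurons)           # cluster labels
# one smooth latent trajectory per cluster (a random walk stands in for
# the dynamic latent factors of the model)
x = np.cumsum(rng.normal(0, 0.1, (n_clusters, n_bins, latent_dim)), axis=1)
C = rng.normal(0, 0.5, (n_neurons, latent_dim))        # factor loadings
b = rng.normal(-1.0, 0.3, n_neurons)                   # baseline log-rates

# each neuron reads out the trajectory of its own cluster
log_rate = b[:, None] + np.einsum('ik,itk->it', C, x[z])
y = rng.poisson(np.exp(log_rate))                      # spike counts
```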
Mathematical models are essential for understanding and making predictions about systems arising in nature and engineering. Yet mathematical models are a simplification of the true phenomena, making predictions subject to uncertainty. Hence, the ability to quantify uncertainties is essential to any modelling framework, enabling the user to assess the influence of certain parameters on quantities of interest and to control the quality of the model output through a rigorous understanding of uncertainty. Peridynamic models are a particular class of mathematical models that have proven to be remarkably accurate and robust for a large class of material failure problems. However, the high computational expense of peridynamic models remains a major limitation, hindering outer-loop applications that require a large number of simulations, such as uncertainty quantification. This contribution provides a framework to make such computations feasible. By employing a Multilevel Monte Carlo (MLMC) framework, in which the majority of simulations are performed on a coarse mesh and relatively few on a fine mesh, a significant reduction in computational cost can be realised while statistics of structural failure are estimated. The results show a speed-up factor of 16x over a standard Monte Carlo estimator, enabling the forward propagation of uncertain parameters in a computationally expensive peridynamic model. Furthermore, the multilevel method provides an estimate of both the discretisation error and the sampling error, thus improving confidence in numerical predictions. The performance of the approach is demonstrated through an examination of the statistical size effect in quasi-brittle materials.
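The MLMC estimator itself is compact; in the sketch below a toy function stands in for the peridynamic simulator, and the telescoping identity $\mathbb{E}[P_L] = \mathbb{E}[P_0] + \sum_{\ell=1}^{L}\mathbb{E}[P_\ell - P_{\ell-1}]$ is estimated level by level, with most samples spent on the cheap coarse level:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(theta, level):
    """Toy stand-in for a peridynamic simulation at mesh level `level`;
    finer levels are more accurate (and, in practice, far more costly)."""
    h = 2.0 ** (-level)
    return np.sin(theta) + h * np.cos(3 * theta)   # O(h) discretisation error

def mlmc_estimate(samples_per_level):
    """E[P_L] ~= E[P_0] + sum_l E[P_l - P_{l-1}], each term by Monte Carlo.
    Crucially, P_l and P_{l-1} in a correction term share the same sample,
    so the correction variances are small."""
    est = 0.0
    for level, n in enumerate(samples_per_level):
        theta = rng.normal(0.0, 1.0, n)            # uncertain input parameter
        if level == 0:
            est += model(theta, 0).mean()
        else:
            est += (model(theta, level) - model(theta, level - 1)).mean()
    return est

print(mlmc_estimate([10000, 1000, 100]))  # most samples on the coarse level
```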
We introduce a high-order spline geometric approach for the initial boundary value problem for Maxwell's equations. The method is geometric in the sense that it discretizes, in a structure-preserving fashion, the two de Rham sequences of differential forms involved in the formulation of the continuous system. Both the Amp\`ere--Maxwell and the Faraday equations are required to hold strongly, while two discrete Hodge star operators are used to make the system solvable. By exploiting the properties of the chosen spline spaces and concepts from exterior calculus, a non-standard explicit-in-time formulation is introduced, based on the solution of linear systems with matrices exhibiting Kronecker product structure, rather than on mass matrices as in the standard literature. These matrices arise from the application of the exterior (wedge) product in the discrete setting, and they exhibit Kronecker product structure independently of the geometry of the domain or the material parameters. The resulting scheme preserves the desirable energy conservation properties of the known approaches. The computational advantages of the newly proposed scheme are studied both through a complexity analysis and through numerical experiments in three dimensions.
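To see why Kronecker structure pays off, recall the standard identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(B X A^{T})$: a system whose matrix is a Kronecker product reduces to two small solves instead of one large one. An illustrative numpy sketch (not the paper's actual solver, which exploits this structure within the spline setting):

```python
import numpy as np

def kron_solve(A, B, C):
    """Solve (A kron B) vec(X) = vec(C) without forming the big matrix,
    using (A kron B) vec(X) = vec(B X A^T), i.e. X = B^{-1} C A^{-T}."""
    Y = np.linalg.solve(B, C)           # Y = B^{-1} C
    return np.linalg.solve(A, Y.T).T    # X = Y A^{-T}

# check against the dense Kronecker system
rng = np.random.default_rng(2)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(5, 5))
C = rng.normal(size=(5, 4))
X = kron_solve(A, B, C)
big = np.kron(A, B) @ X.flatten(order='F')   # vec = column-major flatten
assert np.allclose(big, C.flatten(order='F'))
```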
In this article we address two related issues in structural learning. First, which features make the sequence of events generated by a stochastic chain more difficult to predict. Second, how to model the procedures employed by different learners to identify the structure of sequences of events. Playing the role of a goalkeeper in a video game, participants were asked to predict, step by step, the successive directions -- left, center, or right -- to which the penalty kicker would send the ball. The sequence of kicks was driven by a stochastic chain with memory of variable length. The results showed that at least three features play a role in the first issue: 1) the shape of the context tree summarizing the dependencies between present and past directions; 2) the entropy of the stochastic chain used to generate the sequences of events; and 3) the existence or not of a deterministic periodic sequence underlying the sequences of events. Moreover, the evidence suggests that the best learners rely less on their own past choices to identify the structure of the sequences of events.
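To make the notion of a stochastic chain with memory of variable length concrete, the sketch below generates a kick sequence from a small made-up context tree (not the one used in the experiment): the distribution of the next direction depends on the longest suffix of the past that appears as a context in the tree:

```python
import random

# directions: 0 = left, 1 = center, 2 = right
# hypothetical context tree: each context (a suffix of the past, most
# recent symbol last) maps to next-symbol probabilities for (0, 1, 2)
TREE = {
    (1,):   (0.5, 0.0, 0.5),
    (2,):   (0.6, 0.3, 0.1),
    (0, 0): (0.1, 0.8, 0.1),   # memory of length 2 after a left kick
    (1, 0): (0.2, 0.7, 0.1),
    (2, 0): (0.3, 0.4, 0.3),
    ():     (1/3, 1/3, 1/3),   # root, used before enough history exists
}

def next_kick(past):
    """Look up the longest suffix of `past` that is a context in the tree
    and sample the next direction from its distribution."""
    for k in range(min(len(past), max(map(len, TREE))), -1, -1):
        ctx = tuple(past[-k:]) if k else ()
        if ctx in TREE:
            return random.choices([0, 1, 2], weights=TREE[ctx])[0]

def generate(n):
    past = []
    for _ in range(n):
        past.append(next_kick(past))
    return past

print(generate(20))
```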