Numerical homogenization of multiscale equations typically requires taking an average of the solution to a microscale problem. Both the boundary conditions and the domain size of the microscale problem play an important role in the accuracy of the homogenization procedure. In particular, imposing naive boundary conditions leads to an $\mathcal{O}(\epsilon/\eta)$ error in the computation, where $\epsilon$ is the characteristic size of the microscopic fluctuations in the heterogeneous media and $\eta$ is the size of the microscale domain. This so-called boundary, or ``cell resonance,'' error can dominate discretization error and pollute the entire homogenization scheme. Several techniques exist in the literature to reduce this error; most strategies involve modifying the form of the microscale cell problem. Below we present an alternative procedure based on the observation that the resonance error itself is an oscillatory function of the domain size $\eta$. After rigorously characterizing this oscillatory behavior for one-dimensional and quasi-one-dimensional microscale domains, we present a novel strategy to reduce the resonance error. Rather than modifying the form of the cell problem, the original problem is solved for a sequence of domain sizes, and the results are averaged against kernels satisfying certain moment conditions and regularity properties. Numerical examples in one and two dimensions illustrate the utility of the approach.
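As a rough illustration of the averaging step described above, the sketch below weights naively computed coefficients over a range of domain sizes with a smooth kernel. The cell-problem solver `a_hom(eta)` is a hypothetical placeholder for the actual microscale computation, and the kernel and its normalisation are illustrative, not the ones constructed in the paper.

```python
# Minimal sketch of kernel-averaging over domain sizes, assuming a user-supplied
# solver a_hom(eta) returning the naively homogenized coefficient on a cell of size eta.
import numpy as np

def smooth_kernel(t, q=3):
    """Smooth bump on (0, 1), vanishing to high order at the endpoints (illustrative)."""
    t = np.asarray(t, dtype=float)
    return np.where((t > 0) & (t < 1), (t * (1.0 - t)) ** q, 0.0)

def kernel_averaged_coefficient(a_hom, eta_min, eta_max, n=129):
    """Weighted average of the cell-problem output over a range of domain sizes."""
    etas = np.linspace(eta_min, eta_max, n)
    w = smooth_kernel((etas - eta_min) / (eta_max - eta_min))
    w /= w.sum()                                   # zeroth moment condition: weights sum to 1
    return np.sum(w * np.array([a_hom(eta) for eta in etas]))

# Toy check: a naive coefficient polluted by an O(eps/eta) oscillatory resonance term;
# the kernel-weighted average damps the oscillatory component.
eps, a_star = 0.01, 2.0
a_naive = lambda eta: a_star + (eps / eta) * np.cos(2 * np.pi * eta / eps)
print(a_naive(0.5), kernel_averaged_coefficient(a_naive, 0.5, 0.6))
```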
Certifying the positivity of trigonometric polynomials is of central importance for design problems in discrete-time signal processing. It is well known from the Riesz-Fej\'er spectral factorization theorem that any univariate trigonometric polynomial positive on the unit circle can be decomposed as a Hermitian square with complex coefficients. Here we focus on the case of polynomials with Gaussian integer coefficients, i.e., with real and imaginary parts being integers. We design, analyze and compare, theoretically and practically, three hybrid numeric-symbolic algorithms computing weighted sums of Hermitian squares decompositions for univariate trigonometric polynomials positive on the unit circle with Gaussian integer coefficients. The numerical steps on which the first and second algorithms rely are complex root isolation and semidefinite programming, respectively. An exact sum of Hermitian squares decomposition is obtained thanks to compensation techniques. The third algorithm, also based on complex semidefinite programming, is an adaptation of the rounding and projection algorithm of Peyrl and Parrilo. For all three algorithms, we prove bit complexity and output size estimates that are polynomial in the degree of the input and linear in the maximum bitsize of its coefficients. We compare their performance on randomly chosen benchmarks, and further design a certified finite impulse response filter.
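For intuition about the numerical step of the first algorithm, here is a floating-point sketch of Riesz-Fej\'er spectral factorization via complex root finding. It is not the certified algorithm, which adds exact, compensation-based symbolic steps; the toy polynomial at the end is an assumption for illustration.

```python
# Numerical Riesz-Fejer factorization: recover h with p(e^{iw}) ~= |h(e^{iw})|^2.
import numpy as np

def spectral_factor(c):
    """c = [c_0, ..., c_d]: coefficients of p(w) = sum_{|k|<=d} c_k e^{ikw}
    (with c_{-k} = conj(c_k) implied), assumed positive on the unit circle.
    Returns the coefficients of h, highest degree first."""
    c = np.asarray(c, dtype=complex)
    full = np.concatenate([np.conj(c[::-1][:-1]), c])   # c_{-d}, ..., c_d
    roots = np.roots(full[::-1])                        # roots of z^d * p(z)
    inside = roots[np.abs(roots) < 1.0]                 # one root of each pair (r, 1/conj(r))
    h0 = np.poly(inside)                                # monic factor built from those roots
    scale = np.sqrt(np.real(np.sum(full))) / np.abs(np.polyval(h0, 1.0))
    return scale * h0                                   # match p and |h|^2 at z = 1

# Toy check: p(w) = 3 + e^{iw} + e^{-iw} = 3 + 2 cos(w) > 0 on the circle.
h = spectral_factor([3.0, 1.0])
w = np.linspace(0.0, 2.0 * np.pi, 7)
print(np.max(np.abs((3 + 2 * np.cos(w)) - np.abs(np.polyval(h, np.exp(1j * w))) ** 2)))
```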
We present a priori error estimates for a multirate time-stepping scheme for coupled differential equations. The discretization is based on Galerkin methods in time using two different time meshes for the two parts of the problem. We aim at surface-coupled multiphysics problems like two-phase flows. Special focus is on the handling of the interface coupling to guarantee a coercive formulation as the key to optimal-order error estimates. In a sequence of increasing complexity, we consider the coupling of two ordinary differential equations, a coupled heat conduction problem, and finally a coupled Stokes problem. For the latter we show optimal multirate estimates for the velocity and a suboptimal result for the pressure. The a priori estimates prove that the multirate method decouples the two subproblems exactly. This is the basis for adaptive methods which can choose optimal time meshes for the respective subproblems.
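To make the multirate idea concrete, the following toy sketch advances a slow and a fast scalar ODE with different step sizes. Backward Euler is used as a simple stand-in for the Galerkin-in-time discretization analysed in the paper, and the model, coefficients, and step sizes are illustrative assumptions.

```python
# Multirate time stepping for a toy slow/fast coupled system.
import numpy as np

def multirate_step(u, v, dt, m, a=1.0, b=50.0):
    """One macro step for the toy system  u' = -a u + v,  v' = -b v + u."""
    # Slow subproblem: one backward Euler step of size dt, using the old fast value.
    u_new = (u + dt * v) / (1.0 + dt * a)
    # Fast subproblem: m backward Euler micro steps of size dt/m, slow value frozen.
    tau = dt / m
    for _ in range(m):
        v = (v + tau * u_new) / (1.0 + tau * b)
    return u_new, v

u, v, dt, m = 1.0, 1.0, 0.05, 10
for _ in range(200):                 # integrate the toy system up to t = 10
    u, v = multirate_step(u, v, dt, m)
print(u, v)
```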
Agent-based models are widely used to predict infectious disease spread. For these predictions, one needs to understand how each input parameter affects the result. Here, some parameters may affect the sensitivities of others, requiring the analysis of higher-order coefficients through, e.g., Sobol sensitivity analysis. The geographical structures of real-world regions are distinct in that they are difficult to reduce to single parameter values, making a unified sensitivity analysis intractable. Yet analyzing the importance of geographical structure for the sensitivity of the other input parameters matters, because a strong effect would justify the use of models with real-world geographical representations, as opposed to stylized ones. Here we perform a grouped Sobol sensitivity analysis on COVID-19 spread simulations across a set of three diverse real-world geographical representations. We study the differences in both the results and the sensitivity of the non-geographical parameters across these geographies. By comparing Sobol indices of parameters across geographies, we find evidence that the infection rate could be more sensitive in regions where the population is segregated, while parameters like the recovery period of mild cases are more sensitive in regions with mixed populations. We also show how geographical structure affects the way parameter sensitivities change over time.
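The grouped Sobol analysis can be sketched with the standard pick-and-freeze estimators, as below. The toy model, parameter names, and grouping are placeholders for the actual agent-based COVID-19 simulations and their geographical inputs; a library such as SALib could be used instead of the hand-rolled estimators.

```python
# Grouped first-order and total Sobol indices with the Saltelli pick-and-freeze estimator.
import numpy as np

rng = np.random.default_rng(0)
names = ["infection_rate", "recovery_period", "geo_mixing", "geo_density"]
groups = {"epidemiological": [0, 1], "geography": [2, 3]}

def toy_model(x):
    """Crude stand-in for one simulation output (e.g. an attack rate)."""
    beta, recovery, mixing, density = x.T
    return beta * recovery * (0.5 + mixing) + 0.2 * density * beta

N = 4096
A, B = rng.random((N, len(names))), rng.random((N, len(names)))
fA, fB = toy_model(A), toy_model(B)
V = np.var(np.concatenate([fA, fB]))

for gname, cols in groups.items():
    AB = A.copy()
    AB[:, cols] = B[:, cols]                        # resample only this group of columns
    fAB = toy_model(AB)
    S = np.mean(fB * (fAB - fA)) / V                # grouped first-order index
    ST = 0.5 * np.mean((fA - fAB) ** 2) / V         # grouped total-order index
    print(f"{gname:16s} S = {S:+.3f}  ST = {ST:.3f}")
```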
The elapsed time equation is an age-structured model that describes the dynamics of interconnected spiking neurons through the time elapsed since their last discharge, leading to many interesting questions on the evolution of the system from both a mathematical and a biological point of view. In this work, we deal first with the case in which transmission after a spike is instantaneous, and then with the case in which there is a distributed delay depending on the previous history of the system, which is a more realistic assumption. We then study the well-posedness and the numerical analysis of these elapsed time models. For existence and uniqueness we improve on previous works by relaxing some hypotheses on the nonlinearity, including the strongly excitatory case, while for the numerical analysis we prove that the approximation given by the explicit upwind scheme converges to the solution of the nonlinear problem. We also show some numerical simulations comparing the behavior of the system in the case of instantaneous transmission with the case of distributed delay under different parameters, leading to solutions with different asymptotic profiles.
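A schematic version of the explicit upwind scheme in the instantaneous-transmission case might look as follows; the firing rate, initial data, and the explicit (lagged) treatment of the total activity are illustrative choices rather than the precise setting analysed in the paper.

```python
# Explicit upwind scheme for a schematic elapsed-time model
#   dn/dt + dn/ds + p(s, N(t)) n = 0,   n(t, 0) = N(t) = \int p(s, N(t)) n(t, s) ds.
import numpy as np

def p(s, N):
    """Toy firing rate: neurons with elapsed time above a threshold fire,
    modulated by the network activity N (an illustrative choice)."""
    return (s > 0.5).astype(float) * (1.0 + 0.5 * np.tanh(N))

S, J = 4.0, 400                      # truncated age interval [0, S] and number of cells
ds = S / J
dt = 0.9 * ds                        # CFL-type condition dt <= ds for the explicit scheme
s = (np.arange(J) + 0.5) * ds
n = np.exp(-s)
n /= np.sum(n) * ds                  # initial probability density in elapsed time

N = 0.0
for _ in range(int(2.0 / dt)):       # integrate up to t = 2
    rate = p(s, N)                   # rate uses the activity from the previous step (lagged)
    N = np.sum(rate * n) * ds        # explicit evaluation of N(t) = \int p n ds
    n_new = n - (dt / ds) * (n - np.roll(n, 1)) - dt * rate * n
    n_new[0] = n[0] - (dt / ds) * (n[0] - N) - dt * rate[0] * n[0]   # inflow n(t, 0) = N(t)
    n = n_new
print("total mass:", np.sum(n) * ds, "activity:", N)
```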
Fitted finite element methods are constructed for a singularly perturbed convection-diffusion problem in two space dimensions. Exponential splines as basis functions are combined with Shishkin meshes to obtain a stable, parameter-uniform numerical method. These schemes satisfy a discrete maximum principle. In the classical case, the numerical approximations converge at a rate of second order in the maximum pointwise norm, while for all values of the singular perturbation parameter they converge at a rate of first order.
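For reference, a one-dimensional piecewise-uniform Shishkin mesh of the kind used as a building block for such tensor-product meshes can be generated as in the following sketch; the constants `sigma` and `beta` are generic placeholders for the values fixed by the analysis of the convection-diffusion operator.

```python
# One-dimensional piecewise-uniform Shishkin mesh with a layer at x = 1.
import numpy as np

def shishkin_mesh(N, eps, beta=1.0, sigma=2.0):
    """N+1 points on [0, 1] with the boundary layer near x = 1 resolved."""
    tau = min(0.5, sigma * eps / beta * np.log(N))      # transition point
    coarse = np.linspace(0.0, 1.0 - tau, N // 2 + 1)    # uniform outside the layer
    fine = np.linspace(1.0 - tau, 1.0, N // 2 + 1)      # uniform inside the layer
    return np.concatenate([coarse, fine[1:]])

x = shishkin_mesh(N=64, eps=1e-4)
print(len(x), x[:3], x[-3:])
```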
Nowadays, numerical models are widely used in most engineering fields to simulate the behaviour of complex systems, such as power plants or wind turbines in the energy sector. These models are nevertheless affected by uncertainties of different natures (numerical, epistemic) which can affect the reliability of their predictions. We develop here a new method for quantifying conditional parameter uncertainty within a chain of two numerical models in the context of multiphysics simulation. More precisely, we aim to calibrate the parameters $\theta$ of the second model of the chain conditionally on the value of the parameters $\lambda$ of the first model, while assuming that the probability distribution of $\lambda$ is known. This conditional calibration is carried out from the available experimental data of the second model. In doing so, we aim to quantify as accurately as possible the impact of the uncertainty of $\lambda$ on the uncertainty of $\theta$. To perform this conditional calibration, we set out a nonparametric Bayesian formalism to estimate the functional dependence between $\theta$ and $\lambda$, denoted by $\theta(\lambda)$. First, each component of $\theta(\lambda)$ is assumed to be the realization of a Gaussian process prior. Then, if the second model is written as a linear function of $\theta(\lambda)$, the Bayesian machinery allows us to compute analytically the posterior predictive distribution of $\theta(\lambda)$ for any set of realizations of $\lambda$. The effectiveness of the proposed method is illustrated on several analytical examples.
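In the scalar case with a single experiment regressor, the posterior predictive distribution mentioned above reduces to standard Gaussian conditioning, as in the following toy sketch; the kernel, the linear regressor $a(\lambda)$, and the synthetic data are illustrative assumptions.

```python
# Toy conditional calibration: GP prior on theta(lambda), second model linear in theta,
#   y_i = a(lambda_i) * theta(lambda_i) + noise, so the posterior is Gaussian in closed form.
import numpy as np

def rbf(x, y, ell=0.3, var=1.0):
    return var * np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(1)
lam = rng.uniform(0, 1, 20)                         # lambda values with experimental data
a = 1.0 + lam                                       # linear-model regressor a(lambda)
theta_true = np.sin(2 * np.pi * lam)                # synthetic "true" theta(lambda)
sigma = 0.05
y = a * theta_true + sigma * rng.normal(size=lam.size)

lam_star = np.linspace(0, 1, 100)
K = rbf(lam, lam)
A = np.diag(a)
S = A @ K @ A.T + sigma ** 2 * np.eye(lam.size)     # covariance of the data y
Ks = rbf(lam_star, lam)                             # cross-covariance of theta(lam*) with theta(lam)
mean = Ks @ A.T @ np.linalg.solve(S, y)             # posterior mean of theta(lambda*)
cov = rbf(lam_star, lam_star) - Ks @ A.T @ np.linalg.solve(S, A @ Ks.T)
print(mean[:5], np.sqrt(np.diag(cov))[:5])
```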
Semitopologies model consensus in distributed systems by equating the notion of a quorum -- a set of participants sufficient to make local progress -- with that of an open set. This yields a topology-like theory of consensus, but semitopologies generalise topologies, since the intersection of two quorums need not itself be a quorum. The semitopological model of consensus is naturally heterogeneous and local, just like topologies can be heterogeneous and local, and for the same reasons: points may have different quorums, and there is no restriction that open sets / quorums be uniformly generated (e.g. open sets can be something other than two-thirds majorities of the points in the space). Semiframes are an algebraic abstraction of semitopologies. They are to semitopologies as frames are to topologies. We give a notion of semifilter, which plays a role analogous to that of filters, and show how to build a semiframe out of the open sets of a semitopology, and a semitopology out of the semifilters of a semiframe. We define suitable notions of category and morphism and prove a categorical duality between (sober) semiframes and (spatial) semitopologies, and investigate well-behavedness properties of semitopologies and semiframes across the duality. Surprisingly, the structure of semiframes is not what one might initially expect just from looking at semitopologies, and the canonical structure required for the duality result -- a compatibility relation *, generalising set intersection -- is also canonical for expressing well-behavedness properties. Overall, we deliver a new categorical, algebraic, abstract framework within which to study consensus in distributed systems, and which is also simply interesting to consider as a mathematical theory in its own right.
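The following tiny example, with opens taken to be the empty set together with all two-thirds majorities of a six-point space (a convention chosen purely for illustration), makes the key difference with topologies concrete: unions of opens are open, but the intersection of two quorums need not be.

```python
# A six-point "two-thirds majority" semitopology: closed under unions, not intersections.
from itertools import combinations

pts = frozenset(range(6))

def is_open(s):
    """Open = empty, or containing at least two thirds of the points."""
    return len(s) == 0 or 3 * len(s) >= 2 * len(pts)

# Closure under unions (finite unions suffice on a finite space):
opens = [frozenset(c) for r in range(len(pts) + 1)
         for c in combinations(pts, r) if is_open(frozenset(c))]
assert all(is_open(a | b) for a in opens for b in opens)

# ... but the intersection of two quorums need not be a quorum:
q1, q2 = frozenset({0, 1, 2, 3}), frozenset({2, 3, 4, 5})
print(is_open(q1), is_open(q2), is_open(q1 & q2))   # True True False
```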
I propose an alternative algorithm to compute the maximin support (MMS) voting rule. Instead of using linear programming, the new algorithm computes the maximin support value of a committee by solving a sequence of maximum flow problems.
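One possible flow-based evaluation of the maximin support value, in the spirit described above, is sketched below: bisect on the target support $t$ and test feasibility with one maximum flow computation per candidate value. The use of networkx, the tolerance, and the toy ballots are illustrative choices, not the paper's exact procedure.

```python
# Maximin support of a fixed committee via bisection over max-flow feasibility tests.
import networkx as nx

def maximin_support(ballots, committee, iters=40):
    """ballots: list of sets of approved candidates; committee: set of candidates."""
    def feasible(t):
        G = nx.DiGraph()
        for i, b in enumerate(ballots):
            G.add_edge("src", ("v", i), capacity=1.0)        # each voter has one unit of weight
            for c in b & committee:
                G.add_edge(("v", i), ("c", c), capacity=1.0)  # weight only flows to approved members
        for c in committee:
            G.add_edge(("c", c), "sink", capacity=t)          # member c absorbs at most t
        value, _ = nx.maximum_flow(G, "src", "sink")
        return value >= t * len(committee) - 1e-9             # can every member reach support t?

    lo, hi = 0.0, len(ballots) / len(committee)
    for _ in range(iters):                                    # bisection on the support value
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
    return lo

ballots = [{1, 2}, {1, 2}, {1, 3}, {2, 3}, {3}]
print(maximin_support(ballots, committee={1, 2, 3}))          # approx 5/3
```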
Gaussian processes (GPs) are popular nonparametric statistical models for learning unknown functions and quantifying the spatiotemporal uncertainty in data. Recent works have extended GPs to model scalar and vector quantities distributed over non-Euclidean domains, including smooth manifolds appearing in numerous fields such as computer vision, dynamical systems, and neuroscience. However, these approaches assume that the manifold underlying the data is known, limiting their practical utility. We introduce RVGP, a generalisation of GPs for learning vector signals over latent Riemannian manifolds. Our method uses positional encoding with eigenfunctions of the connection Laplacian associated with the tangent bundle, readily derived from common graph-based approximations of data. We demonstrate that RVGP possesses global regularity over the manifold, which allows it to super-resolve and inpaint vector fields while preserving singularities. Furthermore, we use RVGP to reconstruct high-density neural dynamics from low-density EEG recordings in healthy individuals and Alzheimer's patients. We show that vector field singularities are important disease markers and that their reconstruction leads to a classification accuracy of disease states comparable to that of high-density recordings. Thus, our method overcomes a significant practical limitation in experimental and clinical applications.
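As a simplified illustration of the positional-encoding idea, the sketch below uses eigenvectors of an ordinary graph Laplacian (rather than the connection Laplacian on the tangent bundle used by RVGP) as features for an off-the-shelf GP regressor on a toy vector field sampled from a circle; all modelling choices here are stand-ins.

```python
# Graph-Laplacian positional encodings + GP regression of a vector field (toy stand-in).
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
theta = np.sort(rng.uniform(0, 2 * np.pi, 200))
X = np.c_[np.cos(theta), np.sin(theta)]               # points sampled on a circle
V = np.c_[-np.sin(theta), np.cos(theta)]              # tangent vector field to learn

# k-nearest-neighbour graph Laplacian; its low eigenvectors serve as positional encodings.
D = cdist(X, X)
W = (D < np.sort(D, axis=1)[:, [10]]).astype(float)   # 10 nearest neighbours per point
W = np.maximum(W, W.T)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W
_, phi = eigh(L)
enc = np.sqrt(len(X)) * phi[:, 1:17]                  # skip the constant eigenvector

train = rng.choice(len(X), size=40, replace=False)    # sparse observations of the field
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-4)
gp.fit(enc[train], V[train])                          # one GP output per vector component
V_pred = gp.predict(enc)
print("mean reconstruction error:", np.mean(np.linalg.norm(V_pred - V, axis=1)))
```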
We hypothesize that, due to the greedy nature of learning in multi-modal deep neural networks, these models tend to rely on just one modality while under-fitting the other modalities. Such behavior is counter-intuitive and hurts the models' generalization, as we observe empirically. To estimate a model's dependence on each modality, we compute the gain in accuracy when the model has access to that modality in addition to another one. We refer to this gain as the conditional utilization rate. In our experiments, we consistently observe an imbalance in conditional utilization rates between modalities, across multiple tasks and architectures. Since the conditional utilization rate cannot be computed efficiently during training, we introduce a proxy for it based on the pace at which the model learns from each modality, which we refer to as the conditional learning speed. We propose an algorithm that balances the conditional learning speeds between modalities during training and demonstrate that it indeed addresses the issue of greedy learning. The proposed algorithm improves the model's generalization on three datasets: Colored MNIST, Princeton ModelNet40, and NVIDIA Dynamic Hand Gesture.
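Concretely, the conditional utilization rate described above can be computed from validation accuracies as in this minimal sketch; the accuracy numbers and modality names are hypothetical placeholders.

```python
# Conditional utilization rate: accuracy gain from adding modality m1 on top of m2.
def conditional_utilization_rate(acc_both, acc_other_only):
    """u(m1 | m2) = accuracy(m1, m2) - accuracy(m2)."""
    return acc_both - acc_other_only

# Hypothetical validation accuracies of a bi-modal model and its single-modality branches.
acc_av, acc_audio, acc_video = 0.91, 0.83, 0.62
print("u(video | audio) =", conditional_utilization_rate(acc_av, acc_audio))
print("u(audio | video) =", conditional_utilization_rate(acc_av, acc_video))
```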