This article introduces a new neural network stochastic model to generate a one-dimensional stochastic field with turbulent velocity statistics. Both the model architecture and the training procedure are grounded in the Kolmogorov and Obukhov statistical theories of fully developed turbulence, guaranteeing descriptions of (1) the energy distribution, (2) the energy cascade and (3) intermittency across scales, in agreement with experimental observations. The model is a generative adversarial network (GAN) with multiple multiscale optimization criteria. First, we use three physics-based criteria: the variance, skewness and flatness of the increments of the generated field, which recover, respectively, the turbulent energy distribution, energy cascade and intermittency across scales. Second, the GAN criterion, based on reproducing statistical distributions, is applied to segments of different lengths of the generated field. Furthermore, to mimic the multiscale decompositions frequently used in turbulence studies, the model architecture is fully convolutional, with kernel sizes varying across the layers of the model. To train our model, we use turbulent velocity signals from grid turbulence in the Modane wind tunnel.
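The three physics-based criteria above are scale-by-scale statistics of the field increments. A minimal NumPy sketch of how such statistics can be computed from a 1D signal (illustrative only; the function name and scale set are our own, not the paper's code):

```python
import numpy as np

def increment_statistics(u, scales):
    """Variance, skewness and flatness of the increments
    du_r(x) = u(x + r) - u(x) of a 1D field u, for each scale r.
    For a turbulent signal, the variance encodes the energy distribution,
    the skewness the energy cascade, and the flatness the intermittency."""
    stats = {}
    for r in scales:
        du = u[r:] - u[:-r]          # increments at separation r
        var = np.mean(du**2)
        skew = np.mean(du**3) / var**1.5
        flat = np.mean(du**4) / var**2
        stats[r] = (var, skew, flat)
    return stats
```

For a Gaussian signal the flatness is 3 at every scale; intermittency appears as flatness growing at small scales.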
We describe a simple deterministic near-linear time approximation scheme for uncapacitated minimum cost flow in undirected graphs with real edge weights, a problem also known as transshipment. Specifically, our algorithm takes as input a (connected) undirected graph $G = (V, E)$, vertex demands $b \in \mathbb{R}^V$ such that $\sum_{v \in V} b(v) = 0$, positive edge costs $c \in \mathbb{R}_{>0}^E$, and a parameter $\varepsilon > 0$. In $O(\varepsilon^{-2} m \log^{O(1)} n)$ time, it returns a flow $f$ such that the net flow out of each vertex is equal to the vertex's demand and the cost of the flow is within a $(1 + \varepsilon)$ factor of optimal. Our algorithm is combinatorial and has no running time dependency on the demands or edge costs. With the exception of a recent result presented at STOC 2022 for polynomially bounded edge weights, all almost- and near-linear time approximation schemes for transshipment relied on randomization to embed the problem instance into low-dimensional space. Our algorithm instead deterministically approximates the cost of routing decisions that would be made if the input were subject to a random tree embedding. To avoid computing the $\Omega(n^2)$ vertex-vertex distances that an approximation of this kind suggests, we also take advantage of the clustering method used in the well-known Thorup-Zwick distance oracle.
Practical parameter identifiability in ODE-based epidemiological models is a known issue, yet one that merits further study. It is essentially ubiquitous due to noise and errors in real data. In this study, to avoid uncertainty stemming from data of unknown quality, simulated data with added noise are used to investigate practical identifiability in two distinct epidemiological models. Particular emphasis is placed on the role of initial conditions, which are assumed unknown, except for those that are directly measured. Instead of focusing on a single estimation method, we use and compare results from several widely used methods, including maximum likelihood and Markov chain Monte Carlo (MCMC) estimation. Among other findings, our analysis revealed that the MCMC estimator is overall more robust than the point estimators considered. Its estimates and predictions improve when the initial conditions of certain compartments are fixed so that the model becomes globally identifiable. For the point estimators, whether fixing or fitting the initial conditions that are not directly measured improves parameter estimates is model-dependent. Specifically, in the standard SEIR model, fixing the initial condition of the susceptible population S(0) improved parameter estimates, while this was not true when fixing the initial condition of the asymptomatic population in a more involved model. Our study corroborates that the quality of parameter estimates changes depending on whether pre-peak or post-peak time series are used. Finally, our examples suggest that in the presence of significantly noisy data, the value of structural identifiability is moot.
Structural identifiability is an important property of parametric ODE models. When conducting an experiment and inferring a parameter value from time-series data, we want to know whether the value is globally identifiable, locally identifiable, or non-identifiable. Global identifiability of the parameter indicates that there exists only one possible solution to the inference problem, local identifiability suggests that there could be several (but finitely many) possibilities, while non-identifiability implies that there are infinitely many possibilities for the value. Having this information is useful since one would, for example, only perform inference for the parameters that are identifiable. Given the current significance and widespread research conducted in this area, we decided to create a database of linear compartment models and their identifiability results. This facilitates the process of checking theorems and conjectures and drawing conclusions on identifiability. By only storing models up to symmetries and isomorphisms, we optimize memory efficiency and reduce query time. We conclude by applying our database to real problems. We tested a conjecture about deleting one leak of the model, stated in the paper 'Linear compartmental models: Input-output equations and operations that preserve identifiability' by E. Gross et al., and managed to produce a counterexample. We also compute some interesting statistics related to the identifiability of linear compartment model parameters.
This work concerns the analysis of the discontinuous Galerkin spectral element method (DGSEM) with implicit time stepping for the numerical approximation of nonlinear scalar conservation laws in multiple space dimensions. We consider either the DGSEM with backward Euler time stepping, or a space-time DGSEM discretization that removes the restriction on the time step. We design graph viscosities in space, and in time for the space-time DGSEM, to make the schemes maximum principle preserving and entropy stable for every admissible convex entropy. We also establish well-posedness of the discrete problems by showing existence and uniqueness of the solutions to the nonlinear implicit algebraic relations that need to be solved at each time step. Numerical experiments in one space dimension are presented to illustrate the properties of these schemes.
We present a method for computing nearly singular integrals that occur when single or double layer surface integrals, for harmonic potentials or Stokes flow, are evaluated at nearby points. Such values could be needed in solving an integral equation when one surface is close to another, or to obtain values at grid points. We replace the singular kernel with a regularized version having a length parameter $\delta$ in order to control the discretization error. Analysis near the singularity leads to an expression for the error due to regularization which has terms with unknown coefficients multiplying known quantities. By computing the integral with three choices of $\delta$ we can solve for an extrapolated value whose regularization error is reduced to $O(\delta^5)$, uniformly for target points on or near the surface. In examples with $\delta/h$ constant and moderate resolution we observe total error about $O(h^5)$ close to the surface. For convergence as $h \to 0$ we can choose $\delta$ proportional to $h^q$ with $q < 1$ to ensure the discretization error is dominated by the regularization error. With $q = 4/5$ we find errors about $O(h^4)$. For harmonic potentials we extend the approach to a version with $O(\delta^7)$ regularization; it typically has smaller errors but the order of accuracy is less predictable.
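The extrapolation step amounts to a small linear solve: with the regularization error written as unknown coefficients multiplying known functions of $\delta$, three evaluations determine the extrapolated value. A sketch (the basis functions below are placeholders; the actual ones come from the near-singularity analysis in the paper):

```python
import numpy as np

def extrapolate(deltas, values, basis):
    """Given the regularized integral evaluated at several deltas, and known
    error basis functions f_j(delta) from the near-singularity analysis, solve
        I(delta_i) = I + sum_j c_j * f_j(delta_i)
    for the extrapolated value I (and the nuisance coefficients c_j)."""
    d = np.asarray(deltas, dtype=float)
    A = np.column_stack([np.ones(len(d))] + [f(d) for f in basis])
    sol = np.linalg.solve(A, np.asarray(values, dtype=float))
    return sol[0]  # the extrapolated integral value
```

With two error terms and three choices of $\delta$, the system is square and the leading error terms cancel exactly.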
We design a fully implementable scheme to compute the invariant distribution of ergodic McKean-Vlasov SDEs satisfying a uniform confluence property. Under natural conditions, we prove various convergence results; notably, we obtain rates for the Wasserstein distance in quadratic mean and in the almost sure sense.
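A generic interacting-particle Euler scheme of the kind such approximations build on can be sketched as follows. This is a simplified illustration with a mean-field drift acting through the empirical mean, not the paper's scheme; the occupation-measure averages stand in for the invariant distribution:

```python
import numpy as np

def particle_euler(drift, sigma, n_particles, n_steps, h, rng):
    """Euler scheme for an interacting particle system approximating a
    McKean-Vlasov SDE; here each particle sees the empirical measure only
    through its mean (an illustrative choice of mean-field interaction).
    Returns time-averaged (occupation-measure) mean and variance."""
    x = np.zeros(n_particles)
    means, variances = [], []
    for _ in range(n_steps):
        m = x.mean()                       # empirical-measure statistic
        x = x + h * drift(x, m) + sigma * np.sqrt(h) * rng.standard_normal(n_particles)
        means.append(x.mean())
        variances.append(x.var())
    burn = n_steps // 2                    # discard transient
    return np.mean(means[burn:]), np.mean(variances[burn:])
```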
We propose a framework to perform Bayesian inference using conditional score-based diffusion models to solve a class of inverse problems in mechanics involving the inference of a specimen's spatially varying material properties from noisy measurements of its mechanical response to loading. Conditional score-based diffusion models are generative models that learn to approximate the score function of a conditional distribution using samples from the joint distribution. More specifically, the score functions corresponding to multiple realizations of the measurement are approximated using a single neural network, the so-called score network, which is subsequently used to sample the posterior distribution using an appropriate Markov chain Monte Carlo scheme based on Langevin dynamics. Training the score network only requires simulating the forward model. Hence, the proposed approach can accommodate black-box forward models and complex measurement noise. Moreover, once the score network has been trained, it can be re-used to solve the inverse problem for different realizations of the measurements. We demonstrate the efficacy of the proposed approach on a suite of high-dimensional inverse problems in mechanics that involve inferring heterogeneous material properties from noisy measurements. Some examples we consider involve synthetic data, while others include data collected from actual elastography experiments. Further, our applications demonstrate that the proposed approach can handle different measurement modalities, complex patterns in the inferred quantities, non-Gaussian and non-additive noise models, and nonlinear black-box forward models. The results show that the proposed framework can solve large-scale physics-based inverse problems efficiently.
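The Langevin-based sampling step can be illustrated with an unadjusted Langevin iteration; here a known analytic score stands in for the trained conditional score network, and the function name and step size are illustrative:

```python
import numpy as np

def langevin_sample(score, x0, step, n_steps, rng):
    """Unadjusted Langevin dynamics:
        x_{k+1} = x_k + (step/2) * score(x_k) + sqrt(step) * xi_k,
    where score would be the trained conditional score network evaluated
    at a fixed measurement realization."""
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        x = x + 0.5 * step * score(x) + np.sqrt(step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)
```

With the score of a standard normal, `score = lambda x: -x`, the chain equilibrates near zero mean and unit variance, up to the usual step-size bias of unadjusted Langevin.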
We study the dynamical properties of Rabi oscillations driven by an alternating Rashba field applied to a two-dimensional (2D) harmonic confinement system. We solve the time-dependent (TD) Schr\"{o}dinger equation numerically and map the resulting TD wavefunction onto the Bloch sphere (BS) using the two BS parameters, the zenith ($\theta_B$) and azimuthal ($\phi_B$) angles, extracting the phase information $\phi_B$ as well as the mixing ratio $\theta_B$ between the two BS-pole states. We employ a two-state rotating wave (TSRW) approach and study the fundamental features of $\theta_B$ and $\phi_B$ over time. The TSRW approach reveals a triangular wave formation in $\theta_B$. Moreover, at each apex of the triangular wave, the TD wavefunction passes through the BS pole, and the state is completely replaced by the opposite spin state. The TSRW approach also elucidates a linear change in $\phi_B$: the slope of $\phi_B$ versus time equals the difference between the dynamical terms, which arises from the confinement potential in the harmonic system. The TSRW approach further demonstrates a jump of $\pi$ in the phase difference when the wavefunction passes through the BS pole. The alternating Rashba field causes multiple successive Rabi transitions in the 2D harmonic system. We then introduce the effective BS (EBS) and transform these complicated transitions into an equivalent "single" Rabi oscillation. Consequently, the EBS parameters $\theta_B^{\mathrm{eff}}$ and $\phi_B^{\mathrm{eff}}$ exhibit the mixing and phase difference between the two spin states $\alpha$ and $\beta$, leading to a deep understanding of the TD features of multi-Rabi oscillations. Furthermore, the combination of the BS representation with the TSRW approach successfully reveals the dynamical properties of the Rabi oscillations, even beyond the TSRW approximation.
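The mapping from a two-component state to the BS parameters follows the standard Bloch-sphere parametrization; a minimal sketch (for illustration, not the paper's implementation):

```python
import numpy as np

def bloch_angles(a, b):
    """Map a normalized two-component state a|alpha> + b|beta> to
    Bloch-sphere angles: the zenith theta_B encodes the mixing ratio
    between the two pole states, and the azimuthal phi_B encodes the
    relative phase between the components."""
    theta = 2.0 * np.arctan2(abs(b), abs(a))          # zenith angle
    phi = (np.angle(b) - np.angle(a)) % (2.0 * np.pi)  # azimuthal angle
    return theta, phi
```

At the poles ($\theta_B = 0$ or $\pi$) the state is purely one spin component, which is where the triangular wave in $\theta_B$ has its apexes.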
We use Stein characterisations to derive new moment-type estimators for the parameters of several truncated multivariate distributions in the i.i.d. case; we also derive the asymptotic properties of these estimators. Our examples include the truncated multivariate normal distribution and truncated products of independent univariate distributions. The estimators are explicit and therefore provide an interesting alternative to the maximum-likelihood estimator (MLE). The quality of these estimators is assessed through comparative simulation studies, in which we compare their behaviour to the performance of the MLE and the score matching approach.
We propose a novel and simple spectral method based on the semi-discrete Fourier transform to discretize the fractional Laplacian $(-\Delta)^\frac{\alpha}{2}$. Numerical analysis and experiments are provided to study its performance. Our method has the same symbol $|\boldsymbol\xi|^\alpha$ as the fractional Laplacian $(-\Delta)^\frac{\alpha}{2}$ at the discrete level, and thus it can be viewed as the exact discrete analogue of the fractional Laplacian. This {\it unique feature} distinguishes our method from other existing methods for the fractional Laplacian. Note that our method is different from the Fourier pseudospectral methods in the literature, which are usually limited to periodic boundary conditions (see Remark \ref{remark0}). Numerical analysis shows that our method can achieve spectral accuracy. The stability and convergence of our method in solving the fractional Poisson equations are analyzed. Our scheme yields a multilevel Toeplitz stiffness matrix, and thus fast algorithms can be developed for efficient matrix-vector multiplications. The computational complexity is ${\mathcal O}(2N\log(2N))$, and the memory storage is ${\mathcal O}(N)$ with $N$ the total number of points. Extensive numerical experiments verify our analytical results and demonstrate the effectiveness of our method in solving various problems.
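The fast matrix-vector multiplication exploits the Toeplitz structure via circulant embedding and the FFT. A one-level sketch (illustrative; the scheme's stiffness matrix is multilevel Toeplitz, handled analogously with multidimensional FFTs):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix T (first column c, first row r,
    with c[0] == r[0]) by a vector x in O(N log N) time: embed T in a
    2N x 2N circulant matrix, which the FFT diagonalizes."""
    n = len(x)
    # First column of the embedding circulant: [c, pad, reversed r[1:]].
    col = np.concatenate([c, [0.0], r[1:][::-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real  # top half of the circulant product equals T @ x
```

Only the $2N$ defining entries are stored, matching the ${\mathcal O}(N)$ memory and ${\mathcal O}(2N\log(2N))$ complexity quoted above.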