The design of numerical tools to model the behavior of building materials is a challenging task. The crucial point is to reduce the computational cost while maintaining high accuracy of the predictions. There are two main limitations on the choice of the time scale that stand in the way of these goals. The first is the numerical restriction. A number of studies have been dedicated to overcoming this limitation, and it has been shown that it can be relaxed with innovative numerical schemes. The second is the physical restriction, imposed by the material properties, the phenomena themselves and the corresponding boundary conditions. This work focuses on a methodology that makes it possible to overcome the physical restriction on the time grid. A so-called Average Reduced Model (ARM) is suggested. It is based on smoothing the time-dependent boundary conditions. In addition, the approximate solution is decomposed into average and fluctuating components. The former is obtained by integrating the equations over time, whereas the latter is a user-defined empirical model. The methodology is investigated for both heat diffusion and coupled heat and mass transfer. It is demonstrated that the essential signal content of the boundary conditions is preserved and that the physical restriction can be relaxed. The model proves to be reliable, accurate and efficient, also in comparison with two years of experimental data. The use of a coarse time step of $1 \, \sf{h}$ is justified. It is shown that, while keeping the error within tolerance, the computational effort can be cut by a factor of almost four in comparison with the complete model on the same time grid.
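To make the decomposition concrete, the following minimal Python sketch splits a synthetic time-dependent boundary condition into an averaged component and a fluctuating remainder; the synthetic signal, the 1 h moving-average filter and all parameter values are assumptions for illustration only, not the ARM formulation itself.

```python
import numpy as np

# Illustrative decomposition of a time-dependent boundary condition into an
# average (smoothed) component and a fluctuating remainder. The signal, window
# length and simple moving-average filter are assumptions of this sketch.
dt = 60.0                                # fine sampling step [s]
t = np.arange(0.0, 48 * 3600.0, dt)      # two days of boundary data
T_bc = 20.0 + 5.0 * np.sin(2 * np.pi * t / 86400.0) \
            + 0.5 * np.random.randn(t.size)   # synthetic boundary temperature

window = int(3600.0 / dt)                # average over 1 h
kernel = np.ones(window) / window
T_avg = np.convolve(T_bc, kernel, mode="same")   # average component
T_fluct = T_bc - T_avg                           # fluctuating component

# The averaged signal can then drive the model on a coarse 1 h grid, while the
# fluctuating part would be supplied by an empirical model as in the abstract.
print(T_avg[::window][:5], T_fluct.std())
```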
Tilt-series alignment is crucial to obtaining high-resolution reconstructions in cryo-electron tomography. Beam-induced local deformation of the sample is hard to estimate from the low-contrast sample alone, and often requires fiducial gold bead markers. The state-of-the-art approach for deformation estimation uses (semi-)manually labelled marker locations in projection data to fit the parameters of a polynomial deformation model. Manually labelled marker locations are difficult to obtain when the data are noisy or markers overlap in the projections. We propose an alternative mathematical approach for simultaneous marker localization and deformation estimation by extending a grid-free super-resolution algorithm first proposed in the context of single-molecule localization microscopy. Our approach does not require labelled marker locations; instead, we use an image-based loss in which we compare the forward projection of markers with the observed data. We equip this marker localization scheme with an additional deformation estimation component and solve for a reduced number of deformation parameters. Using extensive numerical studies on marker-only samples, we show that our approach automatically finds markers and reliably estimates sample deformation without labelled marker data. We further demonstrate the applicability of our approach for a broad range of model mismatch scenarios, including experimental electron tomography data of gold markers on ice.
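The image-based loss idea can be illustrated with a 1D toy problem in which continuous marker positions are fitted by matching a forward projection to the observed data; the Gaussian blob forward model, noise level and optimizer below are assumptions of this sketch and not the grid-free algorithm of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy image-based marker localization: marker positions are continuous
# variables fitted by comparing a forward projection (sum of Gaussian blobs)
# with observed data. The 1D setting, blob width and optimizer are assumptions.
x = np.linspace(0.0, 1.0, 200)           # detector coordinate
true_pos = np.array([0.30, 0.55, 0.72])  # unknown marker positions
sigma = 0.02                             # projected marker width

def forward(pos):
    # Forward projection of point markers blurred by the point-spread function.
    return np.exp(-(x[None, :] - pos[:, None]) ** 2 / (2 * sigma ** 2)).sum(axis=0)

rng = np.random.default_rng(0)
data = forward(true_pos) + 0.05 * rng.standard_normal(x.size)   # noisy observation

init = np.array([0.25, 0.50, 0.80])      # rough initial guesses
fit = least_squares(lambda p: forward(p) - data, init)
print("estimated marker positions:", np.sort(fit.x))
```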
We study sparse linear regression over a network of agents, modeled as an undirected graph with no server node. The estimation of the $s$-sparse parameter is formulated as a constrained LASSO problem wherein each agent owns a subset of the $N$ total observations. We analyze the convergence rate and statistical guarantees of a distributed projected gradient tracking-based algorithm under high-dimensional scaling, allowing the ambient dimension $d$ to grow with (and possibly exceed) the sample size $N$. Our theory shows that, under standard notions of restricted strong convexity and smoothness of the loss functions and suitable conditions on the network connectivity and algorithm tuning, the distributed algorithm converges globally at a {\it linear} rate to an estimate that is within the centralized {\it statistical precision} of the model, $O(s\log d/N)$. When $s\log d/N=o(1)$, a condition necessary for statistical consistency, an $\varepsilon$-optimal solution is attained after $\mathcal{O}(\kappa \log (1/\varepsilon))$ gradient computations and $O (\kappa/(1-\rho) \log (1/\varepsilon))$ communication rounds, where $\kappa$ is the restricted condition number of the loss function and $\rho$ measures the network connectivity. The computation cost matches that of the centralized projected gradient algorithm despite the data being distributed, whereas the communication rounds decrease as the network connectivity improves. Overall, our study reveals interesting connections between statistical efficiency, network connectivity \& topology, and convergence rate in high dimensions.
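A minimal single-machine simulation of such a distributed scheme might look as follows; the ring network, Metropolis weights, step size and $\ell_1$-ball radius are illustrative assumptions, and only a generic projected gradient tracking template is shown, not the exact algorithm analyzed in the paper.

```python
import numpy as np

# Sketch of projected gradient tracking for a constrained LASSO shared by m
# agents on a ring network. Problem sizes, mixing matrix, step size and the
# l1-ball radius are assumptions of this illustration.
rng = np.random.default_rng(1)
m, n_i, d, s = 4, 25, 50, 3                 # agents, samples/agent, dimension, sparsity
theta_true = np.zeros(d); theta_true[:s] = 1.0
A = [rng.standard_normal((n_i, d)) for _ in range(m)]
b = [Ai @ theta_true + 0.01 * rng.standard_normal(n_i) for Ai in A]

W = np.eye(m) * 0.5                         # Metropolis weights on a ring (doubly stochastic)
for i in range(m):
    W[i, (i + 1) % m] = W[i, (i - 1) % m] = 0.25

def proj_l1(v, radius):
    # Euclidean projection onto the l1 ball of given radius (Duchi et al., 2008).
    if np.abs(v).sum() <= radius:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u) - radius
    rho = np.nonzero(u > cssv / np.arange(1, v.size + 1))[0][-1]
    return np.sign(v) * np.maximum(np.abs(v) - cssv[rho] / (rho + 1.0), 0.0)

grad = lambda i, x: A[i].T @ (A[i] @ x - b[i]) / n_i
x = np.zeros((m, d))
y = np.array([grad(i, x[i]) for i in range(m)])      # gradient-tracking variable
alpha, radius = 0.05, np.abs(theta_true).sum()
for _ in range(500):
    # Consensus step, local projected gradient step, then gradient tracking update.
    x_new = np.array([proj_l1(W[i] @ x - alpha * y[i], radius) for i in range(m)])
    y = np.array([W[i] @ y + grad(i, x_new[i]) - grad(i, x[i]) for i in range(m)])
    x = x_new
print("max agent error:", np.abs(x - theta_true).max())
```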
For the approximation and simulation of twofold iterated stochastic integrals and the corresponding L\'{e}vy areas w.r.t. a multi-dimensional Wiener process, we review four algorithms based on a Fourier series approach. In particular, the very efficient algorithm due to Wiktorsson and a recently proposed algorithm due to Mrongowius and R\"ossler are considered. To put recent advances into context, we analyse the four Fourier-based algorithms in a unified framework to highlight differences and similarities in their derivation. A comparison of theoretical properties is complemented by a numerical simulation that reveals the order of convergence for each algorithm. Further, concrete instructions for the choice of the optimal algorithm and parameters for the simulation of solutions of stochastic (partial) differential equations are given. Additionally, we provide advice for an efficient implementation of the considered algorithms and have incorporated these insights into an open-source toolbox that is freely available for both the Julia and MATLAB programming languages. The performance of this toolbox is analysed by comparing it to some existing implementations, where we observe a significant speed-up.
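As a point of reference for the Fourier series approach, the sketch below implements the basic truncated-series approximation of the Lévy areas (Kloeden-Platen style, without the Gaussian tail correction that makes the Wiktorsson and Mrongowius-Rössler algorithms more efficient); the dimension, step size and truncation level are assumptions of this illustration.

```python
import numpy as np

# Basic truncated Fourier-series approximation of the Levy areas and the
# twofold iterated Ito integrals for an m-dimensional Wiener increment on
# [0, h]. No tail correction is applied, so the accuracy is only O(1/p).
rng = np.random.default_rng(0)
m, h, p = 3, 0.01, 50                        # dimension, step size, truncation level

dW = np.sqrt(h) * rng.standard_normal(m)     # Wiener increment on [0, h]
xi = dW / np.sqrt(h)

A = np.zeros((m, m))                         # scaled Levy area approximation
for r in range(1, p + 1):
    zeta = rng.standard_normal(m)
    eta = rng.standard_normal(m)
    term = np.outer(zeta, np.sqrt(2.0) * xi + eta)
    A += (term - term.T) / r
A /= 2.0 * np.pi

# Twofold iterated Ito integrals I[i, j] ~ int_0^h int_0^s dW_i(u) dW_j(s).
I = 0.5 * (np.outer(dW, dW) - h * np.eye(m)) + h * A
print(I)
```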
The extreme or maximum age of information (AoI) is analytically studied for wireless communication systems. In particular, a wireless powered single-antenna source node and a receiver (connected to the power grid) equipped with multiple antennas are considered, operating under independent Rayleigh-faded channels. Via extreme value theory and its corresponding statistical features, we demonstrate that the extreme AoI converges to the Gumbel distribution, while its parameters are obtained in simple closed-form expressions. Capitalizing on this result, the risk of extreme AoI realizations is analytically evaluated according to some relevant performance metrics, and some useful engineering insights are revealed.
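The block-maxima viewpoint behind this result can be illustrated numerically by fitting a Gumbel distribution to per-block maxima of an AoI-like sequence and reading off tail risks; the exponential surrogate for the AoI samples and the block sizes below are assumptions of this sketch, not the system model of the paper.

```python
import numpy as np
from scipy.stats import gumbel_r

# Block-maxima / extreme value analysis: fit a Gumbel distribution to the
# per-block maxima of a synthetic AoI-like sequence and evaluate a tail risk.
rng = np.random.default_rng(0)
aoi = rng.exponential(scale=1.0, size=(1000, 200))   # 1000 blocks of 200 samples
block_max = aoi.max(axis=1)                          # extreme AoI per block

loc, scale = gumbel_r.fit(block_max)                 # Gumbel location and scale
print(f"Gumbel fit: location={loc:.3f}, scale={scale:.3f}")
print("P(extreme AoI > 8):", 1.0 - gumbel_r.cdf(8.0, loc, scale))
```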
When applied to stiff, linear differential equations with time-dependent forcing, Runge-Kutta methods can exhibit convergence rates lower than predicted by classical order condition theory. Commonly, this order reduction phenomenon is addressed by using an expensive, fully implicit Runge-Kutta method with high stage order or a specialized scheme satisfying additional order conditions. This work develops a flexible approach that augments an arbitrary Runge-Kutta method with a fully implicit method used to treat the forcing, so as to maintain the classical order of the base scheme. Our methods and analyses are based on the general-structure additive Runge-Kutta framework. Numerical experiments using diagonally implicit, fully implicit, and even explicit Runge-Kutta methods confirm that the new approach eliminates order reduction for the class of problems under consideration, and that the base methods achieve their theoretical orders of convergence.
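The order reduction phenomenon itself is easy to reproduce on the classical Prothero-Robinson stiff test problem; the sketch below uses a two-stage SDIRK method of classical order 3 and reports the observed convergence order (the augmentation strategy proposed in the paper is not implemented here, and the parameter choices are assumptions).

```python
import numpy as np

# Prothero-Robinson stiff test problem y' = lam*(y - phi(t)) + phi'(t) with
# y(0) = phi(0), whose exact solution is y = phi(t). Integrated with the
# 2-stage SDIRK method of classical order 3 (gamma = (3 + sqrt(3))/6).
lam = -1.0e6
phi = lambda t: np.sin(t)
dphi = lambda t: np.cos(t)

g = (3.0 + np.sqrt(3.0)) / 6.0
A = np.array([[g, 0.0], [1.0 - 2.0 * g, g]])
b = np.array([0.5, 0.5])
c = np.array([g, 1.0 - g])

def sdirk_solve(h, T):
    y, t = phi(0.0), 0.0
    while t < T - 1e-12:
        k = np.zeros(2)
        for i in range(2):
            ti = t + c[i] * h
            rhs = y + h * np.dot(A[i, :i], k[:i])
            # The stage equation is linear in k_i for this problem; solve directly.
            k[i] = (lam * (rhs - phi(ti)) + dphi(ti)) / (1.0 - h * lam * g)
        y += h * np.dot(b, k)
        t += h
    return y

errs = [abs(sdirk_solve(h, 1.0) - phi(1.0)) for h in [0.1, 0.05, 0.025, 0.0125]]
orders = np.log2(np.array(errs[:-1]) / np.array(errs[1:]))
print("observed orders:", orders)   # typically noticeably below the classical order 3
```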
We consider parametric estimation and tests for multi-dimensional diffusion processes with a small dispersion parameter $\varepsilon$ from discrete observations. In parametric estimation of diffusion processes, the main targets are the drift parameter and the diffusion parameter. In this paper, we propose two types of adaptive estimators for both parameters and show their asymptotic properties under $\varepsilon\to0$, $n\to\infty$ and the balance condition that $(\varepsilon n^\rho)^{-1} =O(1)$ for some $\rho>0$. Using these adaptive estimators, we also introduce consistent adaptive testing methods and prove that the test statistics of the adaptive tests have asymptotic distributions under the null hypothesis. In simulation studies, we examine and compare the asymptotic behaviors of the two kinds of adaptive estimators and test statistics. Moreover, as a biological application, we treat the SIR model, which describes simple epidemic spread.
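A toy version of such a two-step (adaptive) procedure for a scalar small-dispersion diffusion is sketched below: the drift parameter is obtained from an Euler-type least-squares contrast and the diffusion parameter from the residual quadratic variation; these simple estimators and all parameter values are assumptions for illustration, not the estimators proposed in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Two-step estimation for the small-dispersion diffusion dX = -a*X dt + eps*b dW
# observed at n discrete times on [0, T].
rng = np.random.default_rng(0)
a_true, b_true, eps = 2.0, 1.0, 0.05
n, T = 2000, 1.0
dt = T / n
X = np.empty(n + 1); X[0] = 1.0
for k in range(n):  # Euler-Maruyama simulation of the sample path
    X[k + 1] = X[k] - a_true * X[k] * dt + eps * b_true * np.sqrt(dt) * rng.standard_normal()

# Step 1: drift parameter from an Euler least-squares contrast.
drift_contrast = lambda a: np.sum((np.diff(X) + a * X[:-1] * dt) ** 2)
a_hat = minimize_scalar(drift_contrast, bounds=(0.0, 10.0), method="bounded").x

# Step 2: diffusion parameter from the residual quadratic variation.
resid = np.diff(X) + a_hat * X[:-1] * dt
b_hat = np.sqrt(np.sum(resid ** 2) / T) / eps
print(f"a_hat = {a_hat:.3f}, b_hat = {b_hat:.3f}")
```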
We consider an adaptive multiresolution-based lattice Boltzmann scheme, which we have recently introduced and studied from the perspective of error control and the theory of equivalent equations. This numerical strategy leads to high compression rates and error control, and its high accuracy has been demonstrated on uniform and dynamically adaptive grids. However, one key issue with non-uniform meshes in the framework of lattice Boltzmann schemes is to properly handle acoustic waves passing through a level jump of the grid, which usually yields spurious effects, in particular reflected waves. In this paper, we propose a simple one-dimensional test case for the linear wave equation on a fixed adapted mesh characterized by a potentially large level jump. We investigate this configuration with our original strategy and prove that we can handle and control the amplitude of the reflected wave, which is of fourth order in the space step of the finest mesh. Numerical illustrations show that the proposed strategy outperforms the existing methods in the literature and allow us to assess the ability of the method to handle the mesh jump properly.
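For readers unfamiliar with the underlying building block, the sketch below shows a basic D1Q2 lattice Boltzmann stream-and-collide iteration for 1D linear advection on a uniform periodic grid; the adaptive multiresolution grid and the level-jump treatment that are the subject of the paper are not reproduced, and all parameter values are assumptions.

```python
import numpy as np

# Basic D1Q2 lattice Boltzmann scheme for 1D linear advection u_t + c u_x = 0
# on a uniform periodic grid (dt = dx / lam). Stability needs |c| < lam and
# a relaxation parameter omega in (0, 2].
N, L = 256, 1.0
dx = L / N
lam = 1.0                      # lattice velocity
c = 0.5                        # advection velocity
omega = 1.5                    # relaxation parameter

x = np.arange(N) * dx
u = np.exp(-200.0 * (x - 0.5) ** 2)          # initial bump
mass0 = u.sum() * dx

# Distributions for the two discrete velocities +lam and -lam, at equilibrium.
fp = 0.5 * u * (1.0 + c / lam)
fm = 0.5 * u * (1.0 - c / lam)

for _ in range(200):
    u = fp + fm
    # Collision: relax both populations towards the local equilibrium.
    fp += omega * (0.5 * u * (1.0 + c / lam) - fp)
    fm += omega * (0.5 * u * (1.0 - c / lam) - fm)
    # Streaming: shift each population one cell in its velocity direction.
    fp = np.roll(fp, 1)
    fm = np.roll(fm, -1)

u = fp + fm
print("mass conserved:", np.isclose(u.sum() * dx, mass0))
```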
With advances in scientific computing and mathematical modeling, complex phenomena can now be reliably simulated. Such simulations can however be very time-intensive, requiring millions of CPU hours to perform. One solution is multi-fidelity emulation, which uses data of varying accuracies (or fidelities) to train an efficient predictive model (or emulator) for the expensive simulator. In complex problems, simulation data with different fidelities are often connected scientifically via a directed acyclic graph (DAG), which cannot be integrated within existing multi-fidelity emulator models. We thus propose a new Graphical Multi-fidelity Gaussian process (GMGP) model, which embeds this scientific DAG information within a Gaussian process framework. We show that the GMGP has desirable modeling traits via two Markov properties, and admits a scalable formulation for recursively computing the posterior predictive distribution along sub-graphs. We also present an experimental design framework over the DAG given a computational budget, and propose a nonlinear extension of the GMGP model via deep Gaussian processes. The advantages of the GMGP model over existing methods are then demonstrated via a suite of numerical experiments and an application to emulation of heavy-ion collisions, which can be used to study the conditions of matter in the Universe shortly after the Big Bang.
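The basic building block that the GMGP generalizes to a DAG is the two-fidelity autoregressive (Kennedy-O'Hagan style) Gaussian process model, sketched below; the test functions, designs and kernels are assumptions of this illustration, and the recursive GMGP computation itself is not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Two-fidelity autoregressive GP: high-fidelity ~ rho * (low-fidelity GP) + discrepancy GP.
f_lo = lambda x: np.sin(8.0 * np.pi * x)                       # cheap simulator
f_hi = lambda x: (x - np.sqrt(2.0)) * f_lo(x) ** 2             # expensive simulator

X_lo = np.linspace(0.0, 1.0, 21)[:, None]                      # dense cheap design
X_hi = np.linspace(0.0, 1.0, 7)[:, None]                       # sparse expensive design
y_lo, y_hi = f_lo(X_lo).ravel(), f_hi(X_hi).ravel()

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-8).fit(X_lo, y_lo)

# Estimate the regression coefficient rho, then fit a GP to the discrepancy.
mu_lo_at_hi = gp_lo.predict(X_hi)
rho = float(np.dot(mu_lo_at_hi, y_hi) / np.dot(mu_lo_at_hi, mu_lo_at_hi))
gp_delta = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-8).fit(
    X_hi, y_hi - rho * mu_lo_at_hi)

X_test = np.linspace(0.0, 1.0, 5)[:, None]
pred_hi = rho * gp_lo.predict(X_test) + gp_delta.predict(X_test)
print(np.c_[f_hi(X_test).ravel(), pred_hi])   # truth vs. multi-fidelity prediction
```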
Despite the state-of-the-art performance for medical image segmentation, deep convolutional neural networks (CNNs) have rarely provided uncertainty estimations regarding their segmentation outputs, e.g., model (epistemic) and image-based (aleatoric) uncertainties. In this work, we analyze these different types of uncertainties for CNN-based 2D and 3D medical image segmentation tasks. We additionally propose a test-time augmentation-based aleatoric uncertainty to analyze the effect of different transformations of the input image on the segmentation output. Test-time augmentation has previously been used to improve segmentation accuracy, yet it has not been formulated in a consistent mathematical framework. Hence, we also propose a theoretical formulation of test-time augmentation, where a distribution of the prediction is estimated by Monte Carlo simulation with prior distributions of parameters in an image acquisition model that involves image transformations and noise. We compare and combine our proposed aleatoric uncertainty with model uncertainty. Experiments with segmentation of fetal brains and brain tumors from 2D and 3D Magnetic Resonance Images (MRI) showed that 1) the test-time augmentation-based aleatoric uncertainty provides a better uncertainty estimation than calculating the test-time dropout-based model uncertainty alone and helps to reduce overconfident incorrect predictions, and 2) our test-time augmentation outperforms a single-prediction baseline and dropout-based multiple predictions.
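A minimal sketch of the test-time augmentation procedure is shown below: the input is repeatedly transformed (random flips plus additive noise as a stand-in for the acquisition model), the model is applied, predictions are mapped back to the original frame, and voxel-wise statistics over the Monte Carlo samples give the segmentation and its aleatoric uncertainty; the dummy model and the chosen transformations are assumptions of this illustration.

```python
import numpy as np

# Test-time augmentation (TTA) based aleatoric uncertainty via Monte Carlo
# sampling of input transformations and noise.
rng = np.random.default_rng(0)

def model(img):
    # Stand-in for a trained CNN: a soft foreground map from intensity thresholding.
    return 1.0 / (1.0 + np.exp(-10.0 * (img - 0.5)))

image = rng.random((64, 64))
n_samples = 20
preds = []
for _ in range(n_samples):
    flips = rng.integers(0, 2, size=2).astype(bool)           # random flip per axis
    aug = image.copy()
    for ax, do_flip in enumerate(flips):
        if do_flip:
            aug = np.flip(aug, axis=ax)
    aug = aug + 0.05 * rng.standard_normal(aug.shape)          # acquisition noise
    p = model(aug)
    for ax, do_flip in enumerate(flips):                       # map prediction back
        if do_flip:
            p = np.flip(p, axis=ax)
    preds.append(p)

preds = np.stack(preds)
segmentation = preds.mean(axis=0) > 0.5        # TTA-averaged segmentation
uncertainty = preds.var(axis=0)                # aleatoric uncertainty estimate
print(uncertainty.mean(), uncertainty.max())
```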
In this paper, we propose a nonlinear distance metric learning scheme based on the fusion of component linear metrics. Instead of merging displacements at each data point, our model calculates the velocities induced by the component transformations, via a geodesic interpolation on a Lie transformation group. Such velocities are later summed up to produce a global transformation that is guaranteed to be diffeomorphic. Consequently, pairwise distances computed this way conform to a smooth and spatially varying metric, which can greatly benefit k-NN classification. Experiments on synthetic and real datasets demonstrate the effectiveness of our model.
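The velocity-fusion idea can be sketched in a few lines: each component linear transformation contributes its matrix logarithm as a velocity, the weighted velocities are summed, and the matrix exponential of the sum yields a single invertible global transformation; the example matrices and weights below are assumptions of this illustration rather than the learned metrics of the paper.

```python
import numpy as np
from scipy.linalg import expm, logm

# Fusing component linear transformations through their Lie-algebra velocities:
# matrix logarithms are summed with weights, then exponentiated back to the group.
A1 = np.array([[1.2, 0.1], [0.0, 0.9]])     # component linear transformations
A2 = np.array([[0.8, 0.0], [0.2, 1.1]])

def fused_transform(weights):
    # Weighted sum of log-velocities, followed by the matrix exponential.
    v = weights[0] * logm(A1) + weights[1] * logm(A2)
    return np.real(expm(v))

w = np.array([0.6, 0.4])                    # weights (spatially varying in the full model)
T = fused_transform(w)
print("fused transformation:\n", T)
print("determinant stays positive:", np.linalg.det(T) > 0.0)
```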