We present a Heterogeneous Multiscale Method for the Landau-Lifshitz equation with a highly oscillatory diffusion coefficient, a simple model for a ferromagnetic composite. A finite element macro scheme is combined with a finite difference micro model to approximate the effective equation corresponding to the original problem. This makes it possible to obtain effective solutions to problems with rapid material variations on a small scale, described by $\varepsilon \ll 1$, which would be too expensive to resolve in a conventional simulation.
In this article, a numerical scheme for finding approximate solutions to the McKendrick-Von Foerster equation with diffusion (M-V-D) is presented. The main difficulty in employing the standard analysis to study the properties of this scheme is due to the presence of a nonlinear and nonlocal term in the Robin boundary condition of the M-V-D. To overcome this, we use the abstract theory of discretizations, based on the notion of the stability threshold, to analyze the scheme. Stability and convergence of the proposed numerical scheme are established.
When analyzing large scale structures, the fine scale heterogeneity must be taken into account for accurate failure prediction. Resolving the fine scale features in the numerical model drastically increases the number of degrees of freedom, making full fine-scale simulations infeasible, especially in cases where the model needs to be evaluated many times. In this paper, a methodology for fine scale modeling of large scale structures is proposed, which combines the variational multiscale method, domain decomposition and model order reduction. Addressing applications where the assumption of scale separation does not hold, the influence of the fine scale on the coarse scale is modelled directly through an additive split of the displacement field. Possible coarse and fine scale solutions of a representative volume element (RVE) are exploited to construct local approximation spaces. The local spaces are designed such that local contributions of RVE subdomains can be coupled in a conforming way. Consequently, the resulting global system of equations takes the effect of the fine scale on the coarse scale into account, is sparse, and is reduced in size compared to the full order model. Several numerical experiments show the accuracy and efficiency of the method.
Numerically solving differential equations with fractional derivatives requires eliminating the singularity inherent in the standard definition of fractional derivatives. The method of integration by parts to eliminate this singularity is well known. It makes it possible to solve some equations, but it increases the order of the equation and sometimes leads to wrong numerical results or instability. We suggest another approach: eliminating the singularity by substitution. It does not increase the order of the equation, and its numerical implementation provides the opportunity to define the fractional derivative as the limit of a discretization. We present a sufficient condition for the substitution-generated difference approximation to be well-conditioned. We demonstrate how some equations can be solved by this method with full confidence that the solution is accurate to at least second order of approximation.
We demonstrate the effectiveness of an adaptive explicit Euler method for the approximate solution of the Cox-Ingersoll-Ross model. This relies on a class of path-bounded timestepping strategies which work by reducing the stepsize as solutions approach a neighbourhood of zero. The method is hybrid in the sense that a convergent backstop method is invoked if the timestep becomes too small, or to prevent solutions from overshooting zero and becoming negative. Under parameter constraints that imply Feller's condition, we prove that such a scheme is strongly convergent, of order at least 1/2. Control of the strong error is important for multi-level Monte Carlo techniques. Under Feller's condition we also prove that the probability of ever needing the backstop method to prevent a negative value can be made arbitrarily small. Numerically, we compare this adaptive method to fixed step implicit and explicit schemes, and a novel semi-implicit adaptive variant. We observe that the adaptive approach leads to methods that are competitive in a domain that extends beyond Feller's condition, indicating suitability for the modelling of stochastic volatility in Heston-type asset models.
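The path-bounded timestepping idea described above can be sketched in a few lines. The step-selection rule, the drift-only backstop, and all parameter names below are illustrative assumptions for a generic CIR discretization, not the authors' scheme:

```python
import numpy as np

def cir_adaptive_em(x0, kappa, theta, sigma, T, dt_max, dt_min, rng):
    """Explicit Euler-Maruyama for dX = kappa*(theta - X)dt + sigma*sqrt(X)dW,
    shrinking the step as X approaches a neighbourhood of zero.
    Illustrative sketch only; the backstop here is a plain drift-only step."""
    t, x = 0.0, x0
    while t < T:
        # path-bounded strategy (hypothetical rule): step shrinks quadratically
        # as x falls below the mean-reversion level theta
        dt = max(dt_min, min(dt_max, (x / theta) ** 2 * dt_max, T - t))
        dW = rng.normal(0.0, np.sqrt(dt))
        x_new = x + kappa * (theta - x) * dt + sigma * np.sqrt(max(x, 0.0)) * dW
        if x_new <= 0.0 or dt <= dt_min:
            # backstop: invoked when the step is too small or the explicit
            # step would overshoot zero
            x_new = x + kappa * (theta - x) * dt
        x, t = max(x_new, 0.0), t + dt
    return x
```

In this sketch the backstop simply discards the diffusion increment; a convergent positivity-preserving scheme would be used in practice.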
We present a novel methodology based on filtered data and moving averages for estimating effective dynamics from observations of multiscale systems. We show in a semi-parametric framework of the Langevin type that our approach is asymptotically unbiased with respect to the theory of homogenization. Moreover, we demonstrate on a range of challenging numerical experiments that our method is accurate in extracting coarse-grained dynamics from multiscale data. In particular, the estimators we propose are more robust and require less knowledge of the full model than the standard technique of subsampling, which is widely employed in practice in this setting.
We study synchronous Q-learning with Polyak-Ruppert averaging (a.k.a. averaged Q-learning) in a $\gamma$-discounted MDP. We establish a functional central limit theorem (FCLT) for the averaged iterate $\bar{\boldsymbol{Q}}_T$ and show that its standardized partial-sum process converges weakly to a rescaled Brownian motion. Furthermore, we show that $\bar{\boldsymbol{Q}}_T$ is actually a regular asymptotically linear (RAL) estimator for the optimal Q-value function $\boldsymbol{Q}^*$ with the most efficient influence function. This implies that the averaged Q-learning iterate has the smallest asymptotic variance among all RAL estimators. In addition, we present a non-asymptotic analysis of the $\ell_{\infty}$ error $\mathbb{E}\|\bar{\boldsymbol{Q}}_T-\boldsymbol{Q}^*\|_{\infty}$, showing that for polynomial step sizes it matches the instance-dependent lower bound as well as the optimal minimax complexity lower bound. In short, our theoretical analysis shows that averaged Q-learning is statistically efficient.
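As a toy illustration of synchronous Q-learning with Polyak-Ruppert averaging, the sketch below runs tabular updates with a polynomial step size and maintains a running average of the iterates. The tensor layout, step-size exponent, and function names are our assumptions, not the paper's exact setting:

```python
import numpy as np

def averaged_q_learning(P, R, gamma, T, alpha=0.5, rng=None):
    """Synchronous Q-learning with Polyak-Ruppert averaging on a tabular MDP.
    P: (S, A, S) transition tensor, R: (S, A) rewards, step size t**(-alpha).
    Returns the averaged iterate Qbar_T. Illustrative sketch only."""
    if rng is None:
        rng = np.random.default_rng(0)
    S, A = R.shape
    Q = np.zeros((S, A))
    Qbar = np.zeros((S, A))
    for t in range(1, T + 1):
        eta = t ** (-alpha)  # polynomial step size
        # synchronous update: sample one next state for every (s, a) pair
        target = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                s_next = rng.choice(S, p=P[s, a])
                target[s, a] = R[s, a] + gamma * Q[s_next].max()
        Q = (1.0 - eta) * Q + eta * target
        Qbar += (Q - Qbar) / t  # running Polyak-Ruppert average of Q_1..Q_t
    return Qbar
```

On a one-state, one-action MDP with reward 1 and $\gamma = 0.5$, the averaged iterate approaches the fixed point $Q^* = 1/(1-\gamma) = 2$.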
We develop a stable finite difference method for the elastic wave equation in bounded media, where the material properties can be discontinuous at curved interfaces. The governing equation is discretized in second order form by a fourth or sixth order accurate summation-by-parts operator. The mesh size is determined by the velocity structure of the material, resulting in nonconforming grid interfaces with hanging nodes. We use order-preserving interpolation and the ghost point technique to couple adjacent mesh blocks in an energy-conserving manner, which is supported by a fully discrete stability analysis. In our previous work for the wave equation, two pairs of order-preserving interpolation operators are needed when imposing the interface conditions weakly by a penalty technique. Here, we only use one pair in the ghost point method. In numerical experiments, we demonstrate that the convergence rate is optimal, and is the same as when a globally uniform mesh is used in a single domain. In addition, with a predictor-corrector time integration method, we obtain time stepping stability with stepsize almost the same as given by the usual Courant-Friedrichs-Lewy condition.
Running machine learning algorithms on large and rapidly growing volumes of data is often computationally expensive. One common trick to reduce the size of a data set, and thus the computational cost of machine learning algorithms, is \emph{probability sampling}. It creates a sampled data set by including each data point from the original data set with a known probability. Although the benefit of running machine learning algorithms on the reduced data set is obvious, one major concern is that the performance of the solution obtained from the samples might be much worse than that of the optimal solution on the full data set. In this paper, we examine the performance loss caused by probability sampling in the context of adaptive submodular maximization. We consider a simple probability sampling method which selects each data point with probability at least $r\in[0,1]$. If we set $r=1$, our problem reduces to finding a solution based on the original full data set. We define the sampling gap as the largest ratio between the optimal solution obtained from the full data set and the optimal solution obtained from the samples, over independence systems. Our main contribution is to show that if the sampling probability of each data point is at least $r$ and the utility function is policywise submodular, then the sampling gap is both upper bounded and lower bounded by $1/r$. We show that policywise submodularity arises in a wide range of real-world applications, including pool-based active learning and adaptive viral marketing.
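The sampling model in question is simple to state in code. The sketch below (with an illustrative per-point probability function, a name we introduce) just builds the reduced data set by independent inclusion:

```python
import random

def probability_sample(data, prob, seed=None):
    """Return a sampled data set that includes each point x independently
    with probability prob(x). The setting in the abstract requires
    prob(x) >= r for every x; prob is a hypothetical user-supplied function."""
    rng = random.Random(seed)
    return [x for x in data if rng.random() < prob(x)]
```

Setting `prob` to the constant function 1 recovers the original full data set, matching the $r=1$ case above.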
We present and investigate a new type of implicit fractional linear multistep method of order two for fractional initial value problems. The method is obtained from the second-order superconvergence of the Gr\"unwald-Letnikov approximation of the fractional derivative at a non-integer shift point. The proposed method has second-order consistency and coincides with the second-order backward difference method for classical initial value problems when the order of the derivative is one. The weight coefficients of the proposed method are obtained from the Gr\"unwald weights and are hence computationally cheaper than those of the second-order fractional backward difference formula. The stability properties are analyzed, and it is shown that the stability region of the method is larger than those of the second-order fractional Adams-Moulton method and the fractional trapezoidal method. Numerical results and illustrations are presented to justify the analytical theory.
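The Gr\"unwald weights referred to above, $w_k = (-1)^k \binom{\alpha}{k}$, can be generated by a standard one-term recurrence, which is what makes them cheap to compute; the shifted-point method itself involves more than these raw weights:

```python
def grunwald_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k), k = 0..n,
    via the standard recurrence w_0 = 1, w_k = w_{k-1} * (1 - (alpha+1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w
```

For $\alpha = 1$ the recurrence reproduces the classical first-order backward difference stencil $(1, -1, 0, 0, \dots)$, consistent with the method reducing to an integer-order formula when the derivative order is one.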
We derive a posteriori error estimates for a fully discrete finite element approximation of the stochastic Cahn-Hilliard equation. The a posteriori bound is obtained by a splitting of the equation into a linear stochastic partial differential equation (SPDE) and a nonlinear random partial differential equation (RPDE). The resulting estimate is robust with respect to the interfacial width parameter and is computable since it involves the discrete principal eigenvalue of a linearized (stochastic) Cahn-Hilliard operator. Furthermore, the estimate is robust with respect to topological changes as well as the intensity of the stochastic noise. We provide numerical simulations to demonstrate the practicability of the proposed adaptive algorithm.