This article is concerned with solving the time-fractional Vakhnenko-Parkes equation using reproducing kernels. Reproducing kernel theory, an orthonormal basis, some important Hilbert spaces, homogenization of the constraints, and the orthogonalization process are the main tools of this technique. The main advantage of the reproducing kernel method is that it is truly meshless. The solution obtained by applying the reproducing kernel Hilbert space method to the time-fractional Vakhnenko-Parkes equation is in the form of a series. The obtained series converges uniformly to the exact solution. It is observed that the implemented method is highly effective. The effectiveness of the reproducing kernel Hilbert space method is presented through tables and graphs. The accuracy of the method is further assessed using different error norms and the order of convergence of the errors.
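In a typical reproducing kernel Hilbert space construction (a generic sketch; the specific spaces and kernels used here may differ), the approximate solution is the truncated series
\[
u_n(x,t) \;=\; \sum_{i=1}^{n} A_i \,\bar{\Psi}_i(x,t),
\]
where the $\bar{\Psi}_i$ are orthonormalized kernel basis functions produced by the Gram-Schmidt process and the coefficients $A_i$ are determined from the data of the problem; the exact solution is recovered in the limit $n \to \infty$.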
This work introduces a novel cause-effect relation in Markov decision processes based on the probability-raising principle. Initially, sets of states are considered as causes and effects; this is subsequently extended to regular path properties, first as effects and then also as causes. The paper lays the mathematical foundations and analyzes the algorithmic properties of these cause-effect relations. This includes algorithms for checking cause conditions given an effect and for deciding the existence of probability-raising causes. As the definition allows for sub-optimal coverage properties, quality measures for causes inspired by concepts of statistical analysis are studied, including recall, coverage ratio, and f-score. The computational complexity of finding optimal causes with respect to these measures is analyzed.
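As a point of reference (a generic formalization; the paper's precise treatment of schedulers and path properties may differ), the probability-raising principle requires a candidate cause $C$ to raise the probability of the effect $E$,
\[
\Pr\bigl(E \mid \Diamond C\bigr) \;>\; \Pr(E),
\]
and the statistical quality measures can be read in the usual way as $\mathrm{recall} = \Pr(\Diamond C \mid E)$, $\mathrm{precision} = \Pr(E \mid \Diamond C)$, with the f-score given by their harmonic mean $2\,\mathrm{precision}\cdot\mathrm{recall}/(\mathrm{precision}+\mathrm{recall})$.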
In this paper, we consider the two-sample location shift model, a classic semiparametric model introduced by Stein (1956). This model is known for its adaptive nature, enabling nonparametric estimation with full parametric efficiency. Existing nonparametric estimators of the location shift often depend on external tuning parameters, which restricts their practical applicability (Van der Vaart and Wellner, 2021). We demonstrate that introducing an additional assumption of log-concavity on the underlying density can alleviate the need for tuning parameters. We propose a one-step estimator of the location shift, utilizing log-concave density estimation techniques to facilitate tuning-free estimation of the efficient influence function. While we employ a truncated version of the one-step estimator for theoretical adaptivity, our simulations indicate that the one-step estimator performs best with zero truncation, eliminating the need for tuning in practical implementation.
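For orientation (a schematic form with illustrative notation, not the paper's exact construction), a one-step estimator updates an initial estimate $\tilde{\Delta}$ of the shift by a single Newton-type step along the estimated efficient influence function $\hat{\psi}$,
\[
\hat{\Delta}_{\mathrm{OS}} \;=\; \tilde{\Delta} \;+\; \frac{1}{n}\sum_{i=1}^{n} \hat{\psi}\bigl(Z_i;\tilde{\Delta}\bigr),
\]
where, in the log-concave approach, $\hat{\psi}$ is built from the score $-(\log\hat{f})'$ of the log-concave density estimate $\hat{f}$, so that no bandwidth or other external tuning parameter enters the construction.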
Of all the possible projection methods for solving large-scale Lyapunov matrix equations, Galerkin approaches remain much more popular than Petrov-Galerkin ones. This is mainly due to the different nature of the projected problems stemming from these two families of methods. While a Galerkin approach leads to the solution of a low-dimensional matrix equation per iteration, a matrix least-squares problem needs to be solved per iteration in a Petrov-Galerkin setting. The significant computational cost of these least-squares problems has steered researchers towards Galerkin methods in spite of the appealing minimization properties of Petrov-Galerkin schemes. In this paper we introduce a framework that allows for modifying the Galerkin approach by low-rank, additive corrections to the projected matrix equation problem, with the two-fold goal of attaining monotonic convergence rates similar to those of Petrov-Galerkin schemes while maintaining essentially the same computational cost as the original Galerkin method. We analyze the well-posedness of our framework and determine possible scenarios where we expect the residual norm attained by two low-rank-modified variants to behave similarly to the one computed by a Petrov-Galerkin technique. A panel of diverse numerical examples shows the behavior and potential of our new approach.
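For context (standard notation, not necessarily that of the paper), the large-scale Lyapunov equation $AX + XA^{\top} + BB^{\top} = 0$ is reduced in a Galerkin method by projecting onto a subspace with orthonormal basis $V_m$, giving the low-dimensional equation
\[
(V_m^{\top} A V_m)\,Y_m + Y_m\,(V_m^{\top} A^{\top} V_m) + V_m^{\top} B B^{\top} V_m = 0, \qquad X_m = V_m Y_m V_m^{\top},
\]
whereas a Petrov-Galerkin method, using a different test space, leads instead to a matrix least-squares problem; the framework above modifies the projected Galerkin equation by low-rank additive corrections.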
Most studies of adaptive algorithm behavior consider performance measures based on mean values such as the mean-square error. The derived models are useful for understanding the algorithm behavior under different environments and can be used for design. Nevertheless, from a practical point of view, the adaptive filter user has only one realization of the algorithm to obtain the desired result. This letter derives a model for the variance of the squared-error sample curve of the least-mean-square (LMS) adaptive algorithm, so that the achievable cancellation level can be predicted based on the properties of the steady-state squared error. The derived results provide the user with useful design guidelines.
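For reference (the standard LMS recursion, with notation that may differ from the letter), the algorithm updates the weight vector as
\[
e(n) = d(n) - \mathbf{w}^{\top}(n)\,\mathbf{x}(n), \qquad \mathbf{w}(n+1) = \mathbf{w}(n) + \mu\, e(n)\,\mathbf{x}(n),
\]
and the quantity modeled here is the variance of the single-realization sample curve $e^{2}(n)$ in steady state, as opposed to the ensemble average $E\{e^{2}(n)\}$ used in most analyses.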
We analyze the wave equation in mixed form, with periodic and/or homogeneous Dirichlet boundary conditions, and nonconstant coefficients that depend on the spatial variable. For the discretization, the weak form of the second equation is replaced by a strong form, written in terms of a projection operator. The system of equations is discretized with B-splines forming a De Rham complex, along with suitable commutative projectors for the approximation of the second equation. The discrete scheme is energy conservative when discretized in time with a conservative method such as Crank-Nicolson. We propose a convergence analysis of the method to study its dependence on the mesh size $h$, with a focus on the consistency error. Numerical results show optimal convergence of the error in energy norm, and a relative error in energy conservation for long-time simulations of the order of machine precision.
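As an illustration (a generic first-order mixed formulation of the acoustic wave equation; the coefficients and functional setting of the paper may differ), one seeks $(p,\mathbf{u})$ such that
\[
\rho(x)\,\partial_t p = \operatorname{div}\mathbf{u}, \qquad \mu(x)\,\partial_t \mathbf{u} = \nabla p,
\]
so that, with periodic or homogeneous Dirichlet boundary conditions, the energy $\tfrac{1}{2}\int \bigl(\rho\,p^{2} + \mu\,|\mathbf{u}|^{2}\bigr)\,dx$ is conserved in time, a property that the Crank-Nicolson discretization reproduces exactly at the discrete level.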
Climate hazards can cause major disasters when they occur simultaneously as compound hazards. To understand the distribution of climate risk and inform adaptation policies, scientists need to simulate a large number of physically realistic and spatially coherent events. Current methods are limited by computational constraints, and the probabilistic spatial distribution of compound events is not given sufficient attention. The bottleneck in current approaches lies in modelling the dependence structure between variables, as inference on parametric models suffers from the curse of dimensionality. Generative adversarial networks (GANs) are well suited to such a problem due to their ability to implicitly learn the distribution of data in high-dimensional settings. We employ a GAN to model the dependence structure for daily maximum wind speed, significant wave height, and total precipitation over the Bay of Bengal, combining this with traditional extreme value theory for controlled extrapolation of the tails. Once trained, the model can be used to efficiently generate thousands of realistic compound hazard events, which can inform climate risk assessments for climate adaptation and disaster preparedness. The method developed is flexible and transferable to other multivariate and spatial climate datasets.
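A common way to combine the two ingredients (a generic semiparametric construction; the threshold and parametrization shown here are illustrative) is to model each marginal empirically below a high threshold $u$ and with a generalized Pareto tail above it,
\[
\hat{F}(x) \;=\;
\begin{cases}
\tilde{F}_n(x), & x \le u,\\[2pt]
1 - \hat{\zeta}_u\bigl(1 + \hat{\xi}\,(x-u)/\hat{\sigma}\bigr)_{+}^{-1/\hat{\xi}}, & x > u,
\end{cases}
\]
with $\hat{\zeta}_u$ estimating $\Pr(X > u)$; the data are transformed to a common scale, the GAN learns the joint dependence structure, and generated samples are mapped back through the inverse marginal transforms.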
High-dimensional variable selection, with many more covariates than observations, is widely documented in standard regression models, but there are still few tools to address it in non-linear mixed-effects models, where data are collected repeatedly on several individuals. In this work, variable selection is approached from a Bayesian perspective and a selection procedure is proposed, combining the use of a spike-and-slab prior with the Stochastic Approximation version of the Expectation Maximisation (SAEM) algorithm. Similarly to Lasso regression, the set of relevant covariates is selected by exploring a grid of values for the penalisation parameter. The SAEM approach is much faster than a classical MCMC (Markov chain Monte Carlo) algorithm and our method shows very good selection performance on simulated data. Its flexibility is demonstrated by implementing it for a variety of non-linear mixed-effects models. The usefulness of the proposed method is illustrated on a problem of genetic marker identification, relevant for genomic-assisted selection in plant breeding.
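For concreteness (a generic spike-and-slab formulation; the exact prior and hyperparameters used in the paper may differ), each covariate effect $\beta_j$ receives a two-component prior
\[
\beta_j \mid \delta_j \;\sim\; (1-\delta_j)\,\mathcal{N}(0,\nu_0) + \delta_j\,\mathcal{N}(0,\nu_1), \qquad \delta_j \sim \mathrm{Bernoulli}(\pi), \qquad \nu_0 \ll \nu_1,
\]
where the spike variance plays a role analogous to the Lasso penalisation parameter and is varied over a grid to produce a path of selected covariate sets.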
We study damped wave propagation problems phrased as abstract evolution equations in Hilbert spaces. Under some general assumptions, including a natural compatibility condition on the initial values, we establish exponential decay estimates for all mild solutions using the language and tools of Hilbert complexes. This framework turns out to be strong enough to conduct our analysis, but also general enough to include a number of interesting examples, some of which are briefly discussed. By a slight modification of the main arguments, we also obtain corresponding decay results for numerical approximations obtained by compatible discretization strategies.
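A prototypical instance of such an abstract evolution problem (written here in a generic second-order form; the compatible Hilbert-complex structure considered in the paper is more general) is
\[
u''(t) + D\,u'(t) + A\,u(t) = 0, \qquad u(0)=u_0,\quad u'(0)=v_0,
\]
with $A$ a nonnegative self-adjoint operator on a Hilbert space and $D$ a damping operator; exponential decay then means an estimate of the form $\|u'(t)\|^{2} + \|u(t)\|_{A}^{2} \le C\,e^{-\alpha t}\bigl(\|v_0\|^{2} + \|u_0\|_{A}^{2}\bigr)$ for all compatible initial values.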
Tensor operations play an essential role in various fields of science and engineering, including multiway data analysis. In this study, we establish a few basic properties of the range and null space of a tensor using block circulant matrices and the discrete Fourier matrix. We then discuss outer inverses of third-order tensors based on the $t$-product, with a prescribed range and kernel. We address the relation of this outer inverse to other generalized inverses, such as the Moore-Penrose inverse, the group inverse, and the Drazin inverse. In addition, we present a few algorithms for computing the outer inverses of tensors. In particular, a $t$-QR decomposition based algorithm is developed for computing these outer inverses.
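Since the $t$-product underlies all of these constructions, a minimal NumPy sketch may help fix ideas (the function below and the assumption of real-valued inputs are illustrative, not the paper's implementation): computing the $t$-product face-wise in the Fourier domain is equivalent to multiplying the associated block circulant matrices.

    import numpy as np

    def t_product(A, B):
        # t-product of third-order tensors A (n1 x n2 x n3) and B (n2 x m x n3):
        # transform along the third mode, multiply frontal slices, transform back.
        n3 = A.shape[2]
        A_hat = np.fft.fft(A, axis=2)
        B_hat = np.fft.fft(B, axis=2)
        C_hat = np.empty((A.shape[0], B.shape[1], n3), dtype=complex)
        for k in range(n3):
            C_hat[:, :, k] = A_hat[:, :, k] @ B_hat[:, :, k]
        return np.real(np.fft.ifft(C_hat, axis=2))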
We consider in this paper a numerical approximation of the Poisson-Nernst-Planck-Navier-Stokes (PNP-NS) system. We construct decoupled semi-discrete and fully discrete schemes that enjoy the properties of positivity preservation, mass conservation, and unconditional energy stability. Then, we establish the well-posedness and regularity of the initial and (periodic) boundary value problem for the PNP-NS system under suitable assumptions on the initial data, and carry out a rigorous convergence analysis for the fully discretized scheme. We also present some numerical results to validate the positivity-preserving property and the accuracy of our scheme.
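For reference (a standard nondimensionalized two-species form with the physical constants set to one; the paper's precise assumptions on data and boundary conditions are as stated above), the PNP-NS system couples the ion concentrations $p, n$, the electric potential $\phi$, and the fluid velocity and pressure $(\mathbf{u}, P)$ through
\[
\partial_t p + \mathbf{u}\cdot\nabla p = \nabla\cdot(\nabla p + p\,\nabla\phi), \qquad
\partial_t n + \mathbf{u}\cdot\nabla n = \nabla\cdot(\nabla n - n\,\nabla\phi), \qquad
-\Delta\phi = p - n,
\]
\[
\partial_t\mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u} - \nu\Delta\mathbf{u} + \nabla P = -(p-n)\,\nabla\phi, \qquad \nabla\cdot\mathbf{u} = 0,
\]
and the scheme is designed so that the discrete concentrations remain nonnegative, total mass is conserved, and a discrete energy is dissipated unconditionally in the time step.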