In this paper, a time-fractional Black-Scholes model (TFBSM) is considered to study the price change of the underlying fractal transmission system. We develop and analyze a numerical method to solve the TFBSM governing European options. The method combines exponential B-spline collocation to discretize in space with a finite difference method to discretize in time. The method is shown to be unconditionally stable using von Neumann analysis, and it is proved to be convergent of order two in space and $2-\mu$ in time, where $\mu$ is the order of the fractional derivative. We implement the method on various numerical examples to illustrate its accuracy and to validate the theoretical findings. In addition, as an application, the method is used to price several European options, such as the European call option, European put option, and European double barrier knock-out call option.
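The abstract does not spell out the time discretization, but a convergence order of $2-\mu$ in time is characteristic of the classical L1 approximation of the Caputo derivative; the following is an illustrative sketch of that standard scheme, not necessarily the authors' exact discretization:

```python
import math

def caputo_l1(u, dt, mu):
    """Classical L1 approximation of the Caputo derivative of order
    mu in (0, 1) at t_n = n*dt, from samples u[0..n] on a uniform grid.
    Its truncation error is O(dt^(2 - mu))."""
    n = len(u) - 1
    # Weights b_k = (k+1)^(1-mu) - k^(1-mu).
    b = [(k + 1) ** (1 - mu) - k ** (1 - mu) for k in range(n)]
    c = dt ** (-mu) / math.gamma(2 - mu)
    return c * sum(b[k] * (u[n - k] - u[n - k - 1]) for k in range(n))
```

For $u(t)=t$ the quadrature is exact, reproducing the Caputo derivative $t^{1-\mu}/\Gamma(2-\mu)$, since the L1 scheme interpolates $u$ piecewise linearly.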
We propose to combine the Carleman estimate and the Newton method to solve an inverse source problem for nonlinear parabolic equations from lateral boundary data. The stability of this inverse source problem is conditionally logarithmic; hence, numerical results obtained by conventional least-squares optimization might not be reliable. To enhance the stability, we approximate the problem by truncating the high-frequency terms of the Fourier series that represents the solution to the governing equation. From this, we derive a system of nonlinear elliptic PDEs whose solution consists of the Fourier coefficients of the solution to the parabolic governing equation. We solve this system by the Carleman-Newton method, a newly developed algorithm for solving nonlinear PDEs. The strengths of the Carleman-Newton method are that (1) no good initial guess is required and (2) the computational cost is moderate. These features are rigorously proved. Having the solution to this system in hand, we can directly compute the solution to the proposed inverse problem. Some numerical examples are presented.
We apply machine-learning methods, such as neural networks, manifold learning, and image processing, to study 2-dimensional amoebae in algebraic geometry and string theory. With the help of embedding manifold projection, we recover complicated conditions obtained from so-called lopsidedness. In certain cases the accuracy can even reach $\sim99\%$, in particular for the lopsided amoeba of $F_0$ with positive coefficients, on which we place primary focus. Using weights and biases, we also find good approximations that determine the genus of an amoeba at lower computational cost. In general, the models can easily predict the genus with over $90\%$ accuracy. With similar techniques, we also investigate the membership problem and image processing of the amoebae directly.
Stochastic kriging has been widely employed for simulation metamodeling to predict the response surface of complex simulation models. However, its use is limited to cases where the design space is low-dimensional because, in general, the sample complexity (i.e., the number of design points required for stochastic kriging to produce an accurate prediction) grows exponentially in the dimensionality of the design space. The large sample size results in both a prohibitive sample cost for running the simulation model and a severe computational challenge due to the need to invert large covariance matrices. Based on tensor Markov kernels and sparse grid experimental designs, we develop a novel methodology that dramatically alleviates the curse of dimensionality. We show that the sample complexity of the proposed methodology grows only slightly in the dimensionality, even under model misspecification. We also develop fast algorithms that compute stochastic kriging in its exact form without any approximation schemes. We demonstrate via extensive numerical experiments that our methodology can handle problems with a design space of more than 10,000 dimensions, improving both prediction accuracy and computational efficiency by orders of magnitude relative to typical alternative methods in practice.
Models of electrical excitation and recovery in the heart have become increasingly detailed, but have yet to be used routinely in the clinical setting to guide personalized intervention in patients. One of the main challenges is calibrating models from the limited measurements that can be made in a patient during a standard clinical procedure. In this work, we propose a novel framework for the probabilistic calibration of electrophysiology parameters on the left atrium of the heart using local measurements of cardiac excitability. Parameter fields are represented as Gaussian processes on manifolds and are linked to measurements via surrogate functions that map from local parameter values to measurements. The posterior distribution of parameter fields is then obtained. We show that our method can recover parameter fields used to generate localised synthetic measurements of effective refractory period. Our methodology is applicable to other measurement types collected with clinical protocols, and more generally for calibration where model parameters vary over a manifold.
We propose and analyze a class of particle methods for the Vlasov equation with a strong external magnetic field in a torus configuration. In this regime, the time step can be subject to stability constraints related to the smallness of the Larmor radius. To avoid this limitation, our approach is based on higher-order semi-implicit numerical schemes already validated on dissipative systems [3] and for magnetic fields pointing in a fixed direction [9, 10, 12]. It hinges on asymptotic insights gained in [11] at the continuous level. Thus, when the magnitude of the external magnetic field is large, this scheme provides a consistent approximation of the guiding-center system, taking into account the curvature and variation of the magnetic field. Finally, we give a theoretical proof of consistency and perform several numerical experiments that establish a solid validation of the method and its underlying concepts.
Parallel-in-time methods for partial differential equations (PDEs) have been the subject of intense development over recent decades, particularly for diffusion-dominated problems. It has been widely reported in the literature, however, that many of these methods perform quite poorly for advection-dominated problems. Here we analyze the particular iterative parallel-in-time algorithm of multigrid reduction-in-time (MGRIT) for discretizations of constant-wave-speed linear advection problems. We focus on common method-of-lines discretizations that employ upwind finite differences in space and Runge-Kutta methods in time. Using a convergence framework we developed in previous work, we prove for a subclass of these discretizations that, if using the standard approach of rediscretizing the fine-grid problem on the coarse grid, robust MGRIT convergence with respect to CFL number and coarsening factor is not possible. This poor convergence and non-robustness is caused, at least in part, by an inadequate coarse-grid correction for smooth Fourier modes known as characteristic components. We propose an alternative coarse-grid operator that provides a better correction of these modes. This coarse-grid operator is related to previous work and uses a semi-Lagrangian discretization combined with an implicitly treated truncation error correction. Theory and numerical experiments show that the coarse-grid operator yields fast MGRIT convergence for many of the method-of-lines discretizations considered, including for both implicit and explicit discretizations of high order.
Cyber-physical systems (CPSs) are usually complex and safety-critical; hence, it is both difficult and important to guarantee that the system's requirements, i.e., specifications, are fulfilled. Simulation-based falsification of CPSs is a practical testing method that can be used to raise confidence in the correctness of the system by only requiring that the system under test can be simulated. As each simulation is typically computationally intensive, an important step is to reduce the number of simulations needed to falsify a specification. We study Bayesian optimization (BO), a sample-efficient method that learns a surrogate model describing the relationship between the parametrization of possible input signals and the evaluation of the specification. In this paper, we improve BO-based falsification in two ways: first, by adopting two prominent BO methods, one that fits local surrogate models and one that exploits the user's prior knowledge; second, by addressing the formulation of acquisition functions for falsification. Benchmark evaluation shows significant improvements when using local surrogate models of BO for falsifying benchmark examples that were previously hard to falsify. Using prior knowledge in the falsification process is shown to be particularly important when the simulation budget is limited. For some of the benchmark problems, the choice of acquisition function clearly affects the number of simulations needed for successful falsification.
Evolution is the theory that the plants and animals of today descended from kinds that existed in the past. Scientists such as Charles Darwin and Alfred Wallace dedicated their lives to observing how species interact with their environment, grow, and change. We are able to predict future changes as well as simulate the process using genetic algorithms. Genetic algorithms give us the opportunity to present multiple variables and parameters to an environment and change their values to simulate different situations. By designing genetic algorithms that maintain entities in an environment, we are able to assign varying characteristics, such as speed, size, and cloning probability, to the entities to simulate real natural selection and evolution in a shorter period of time. Learning about how species grow and evolve allows us to find ways to improve technology, help endangered animals survive, and figure out how diseases spread, along with possible ways of making an environment uninhabitable for them. Using an environment driven by genetic algorithms with parameters for speed, size, and cloning percentage, we can test several changes in the environment and observe how the species interacts within it. After testing different environments with varied amounts of food while keeping the starting population at 10 entities, it was found that an environment with a scarce amount of food was not sustainable for small and slow entities. All environments displayed an increase in speed, but the environments that were richer in food allowed the entities to live for the entire duration of 50 generations and allowed the population to grow significantly.
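A minimal sketch of the kind of simulation described above, with hypothetical trait names and update rules (the study's exact model may differ): each generation, a limited amount of food goes to the fastest entities, and fed entities may clone themselves with small trait mutations.

```python
import random

def step_generation(population, food, rng):
    """Advance one generation: only the fastest `food` entities eat;
    survivors may clone with small Gaussian mutations of their traits."""
    fed = sorted(population, key=lambda e: -e["speed"])[:food]
    next_gen = []
    for e in fed:
        next_gen.append(e)
        if rng.random() < e["clone_prob"]:
            child = {k: max(0.01, v + rng.gauss(0, 0.05)) for k, v in e.items()}
            next_gen.append(child)
    return next_gen

def simulate(generations=50, food=15, start=10, seed=0):
    """Run the toy evolution loop and return the final population."""
    rng = random.Random(seed)
    pop = [{"speed": rng.uniform(0.5, 1.5),
            "size": rng.uniform(0.5, 1.5),
            "clone_prob": rng.uniform(0.1, 0.4)} for _ in range(start)]
    for _ in range(generations):
        if not pop:
            break
        pop = step_generation(pop, food, rng)
    return pop
```

Because selection here acts only on speed, runs of this sketch tend to show the drift toward faster entities that the abstract reports, while the food budget caps the sustainable population size.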
Nonlinear vibration energy harvesting systems can potentially increase the power collected from the kinetic energy available in their operating environment, since they usually can recover energy over broadband frequencies compared to their linear counterparts. However, these systems have a high degree of complexity, are sensitive to slight variations of the parameters and the initial conditions, and may present multiple solutions. For these reasons, it is rare for the designer to have a deep understanding of the dynamic behavior of this type of nonlinear oscillator. This situation is even more delicate when geometric imperfections from the system's manufacturing process are present, as they can significantly influence the energy recovery process. To fill this gap in the understanding of general aspects of the nonlinear dynamics of this kind of system, this paper presents a broad numerical investigation of local and global characteristics of the underlying dynamical systems using bifurcation diagrams and basins of attraction. Bifurcation analysis is performed by exploring the broad spectrum of a harmonic signal, going from low to high amplitude and frequency of excitation. Basins of attraction analysis based on the 0-1 test for chaos is proposed as an efficient statistical technique to identify chaotic and periodic solutions. Different levels of asymmetry are investigated, and a particular situation is defined and analyzed in which the sloping angle at which the system is attached compensates for the asymmetry of the quadratic term. The results show the different solutions defined by the excitation forces and initial conditions, indicating the best scenario for increasing the power output. The adverse effects of the asymmetries are presented; however, we also demonstrate that it is possible to work around this behavior by using the sloping angle to compensate for the asymmetric influence.
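The 0-1 test for chaos (due to Gottwald and Melbourne) used here to classify basins of attraction can be sketched as follows; this is the plain correlation-based variant for illustration, not necessarily the exact implementation used in the paper:

```python
import math

def zero_one_test(x, c=1.7, n_cut=None):
    """Plain 0-1 test for chaos on a scalar time series x.
    Returns K in [-1, 1]: K near 1 suggests chaos (diffusive growth of
    the translation variables), K near 0 a regular/periodic orbit."""
    N = len(x)
    if n_cut is None:
        n_cut = N // 10
    # Translation variables p, q driven by the observable.
    p, q = [0.0], [0.0]
    for j, xj in enumerate(x, start=1):
        p.append(p[-1] + xj * math.cos(j * c))
        q.append(q[-1] + xj * math.sin(j * c))
    # Mean square displacement M(n), averaged over time offsets j.
    M = []
    for n in range(1, n_cut + 1):
        M.append(sum((p[j + n] - p[j]) ** 2 + (q[j + n] - q[j]) ** 2
                     for j in range(N - n)) / (N - n))
    # K = Pearson correlation of M(n) with n (linear growth => K ~ 1).
    ns = list(range(1, n_cut + 1))
    mean_n = sum(ns) / n_cut
    mean_M = sum(M) / n_cut
    cov = sum((a - mean_n) * (b - mean_M) for a, b in zip(ns, M))
    var_n = sum((a - mean_n) ** 2 for a in ns)
    var_M = sum((b - mean_M) ** 2 for b in M)
    return cov / math.sqrt(var_n * var_M)
```

In practice the test is averaged over several random values of $c$ to avoid resonances; its appeal for basin classification is that it needs only a scalar time series, not the full phase-space trajectory.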
With the rapid increase of large-scale, real-world datasets, it becomes critical to address the problem of long-tailed data distribution (i.e., a few classes account for most of the data, while most classes are under-represented). Existing solutions typically adopt class re-balancing strategies such as re-sampling and re-weighting based on the number of observations for each class. In this work, we argue that as the number of samples increases, the additional benefit of a newly added data point will diminish. We introduce a novel theoretical framework to measure data overlap by associating with each sample a small neighboring region rather than a single point. The effective number of samples is defined as the volume of samples and can be calculated by a simple formula $(1-\beta^{n})/(1-\beta)$, where $n$ is the number of samples and $\beta \in [0,1)$ is a hyperparameter. We design a re-weighting scheme that uses the effective number of samples for each class to re-balance the loss, thereby yielding a class-balanced loss. Comprehensive experiments are conducted on artificially induced long-tailed CIFAR datasets and large-scale datasets including ImageNet and iNaturalist. Our results show that when trained with the proposed class-balanced loss, the network is able to achieve significant performance gains on long-tailed datasets.
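The re-weighting can be sketched directly from the stated formula $(1-\beta^{n})/(1-\beta)$; the normalization convention below (weights summing to the number of classes) is an assumption for illustration:

```python
def effective_number(n, beta):
    """Effective number of samples E_n = (1 - beta^n) / (1 - beta)."""
    return (1.0 - beta ** n) / (1.0 - beta)

def class_balanced_weights(samples_per_class, beta=0.999):
    """Per-class loss weights proportional to 1 / E_n, normalized so
    that they sum to the number of classes."""
    inv = [1.0 / effective_number(n, beta) for n in samples_per_class]
    s = sum(inv)
    c = len(samples_per_class)
    return [w * c / s for w in inv]
```

Note the two limits: $\beta = 0$ gives $E_n = 1$ for every class (uniform weights, i.e., no re-balancing), while $\beta \to 1$ gives $E_n \to n$, recovering inverse-class-frequency re-weighting.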