
For valuing European options, a straightforward model is the well-known Black-Scholes formula. Contrary to market reality, this model assumes that the interest rate and the volatility are constant. To relax these assumptions, the Heston model and the Cox-Ingersoll-Ross (CIR) model introduce stochastic volatility and a stochastic interest rate, respectively; their combination is known as the Heston-Cox-Ingersoll-Ross (HCIR) model. Another essential issue that arises when purchasing or selling an asset is the presence of transaction costs, which the Black-Scholes approach ignores. Leland extended the basic Black-Scholes strategy to account for transaction costs. The main purpose of this paper is to apply the alternating direction implicit (ADI) method on a uniform grid to solve the HCIR model with transaction costs for European-style options, and to compare it with the explicit finite difference (EFD) scheme. As evidence of numerical convergence, we also reduce the HCIR model with transaction costs to a linear PDE (HCIR) by ignoring transaction costs, estimate the solution of the HCIR PDE using the ADI method, which is a class of finite difference schemes, and compare it with the analytical solution and the EFD scheme. The ADI method is well suited to multi-dimensional Black-Scholes equations. As the dimensionality of the space increases, finite difference techniques frequently become harder to formulate, implement, and analyze. We therefore employ the ADI approach to split a multi-dimensional problem into several simpler, more manageable sub-problems, mitigating the curse of dimensionality.
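The dimensional-splitting idea can be sketched on the simplest two-dimensional parabolic problem. The following is a minimal Peaceman-Rachford ADI scheme for the heat equation on the unit square, used here as a stand-in for the HCIR pricing PDE; the function name, grid setup, and parameters are illustrative and not the paper's scheme:

```python
import numpy as np

def adi_heat_2d(U, dt, h, steps):
    """Peaceman-Rachford ADI for u_t = u_xx + u_yy on the unit square
    with zero Dirichlet boundaries. U holds interior values only (n x n).
    Each time step solves two one-dimensional implicit problems instead
    of one coupled two-dimensional problem -- the dimensional splitting
    described in the abstract."""
    n = U.shape[0]
    # 1D second-difference stencil (undivided)
    T = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    r = dt / (2.0 * h**2)
    A = np.eye(n) - r * T          # implicit half-step operator
    for _ in range(steps):
        # half-step: implicit in x (axis 0), explicit in y (axis 1)
        U = np.linalg.solve(A, U + r * (U @ T))
        # half-step: implicit in y, explicit in x
        U = np.linalg.solve(A, (U + r * (T @ U)).T).T
    return U
```

For the separable initial datum sin(pi*x)sin(pi*y), the exact solution decays like exp(-2*pi^2*t), which gives a quick consistency check on the scheme's second-order accuracy.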


We present a method to automatically calculate time to fixate (TTF) from eye-tracker data in subjects with neurological impairment using a driving simulator. TTF is the time interval needed for a person to notice a stimulus after its first occurrence. Specifically, we measured the time from the moment a child started to cross the street until the driver directed their gaze toward the child. Of the 108 neurological patients recruited for the study, the analysis of TTF was performed in 56 patients to assess fit-, unfit-, and conditionally-fit-to-drive patients. The results showed that the proposed method, based on the YOLO (you only look once) object detector, is efficient for computing TTFs from eye-tracker data. We obtained discriminative results for fit-to-drive patients by applying Tukey's honest significant difference post hoc test (p < 0.01), while no difference was observed between the conditionally-fit and unfit-to-drive groups (p = 0.542). Moreover, we show that time-to-collision (TTC), initial gaze distance (IGD) from pedestrians, and speed at hazard onset did not influence the result, while the only significant interaction on TTF is among fitness, IGD, and TTC. The obtained TTFs are also compared with perception response times (PRT) calculated independently of the eye-tracker data and YOLO. Although we reached statistically significant results that speak in favor of applying the method for the assessment of fitness to drive, we provide detailed directions for future driving-simulation-based evaluation and propose a processing workflow to secure reliable TTF calculation and its possible application in, for example, psychology and neuroscience.
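The core TTF computation can be sketched as follows. This is a hypothetical simplification, not the paper's pipeline: it assumes gaze samples as timestamped points and per-frame pedestrian bounding boxes (e.g. from a YOLO detector), and reports the first post-onset gaze sample that lands inside the box:

```python
def time_to_fixate(gaze, boxes, onset):
    """First time after hazard onset at which the gaze point falls
    inside the detected pedestrian bounding box, minus the onset time.

    gaze:  iterable of (t, x, y) eye-tracker samples
    boxes: dict mapping t -> (x1, y1, x2, y2) detections
    onset: time the hazard (the child) first appears
    Returns TTF in the same time units, or None if the subject
    never fixated on the hazard.
    """
    for t, x, y in sorted(gaze):
        if t < onset or t not in boxes:
            continue
        x1, y1, x2, y2 = boxes[t]
        if x1 <= x <= x2 and y1 <= y <= y2:
            return t - onset
    return None
```

A real implementation would additionally synchronize detector frames with eye-tracker timestamps and require a minimum dwell time before counting a sample as a fixation.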

The main objective of the present paper is to construct a new class of space-time discretizations for the stochastic $p$-Stokes system and analyze its stability and convergence properties. We derive regularity results for the approximation that are similar to the natural regularity of solutions. One of the key arguments relies on discrete extrapolation, which allows us to relate lower moments of discrete maximal processes. We show that, if the generic spatial discretization is constraint-conforming, then the velocity approximation satisfies a best-approximation property in the natural distance. Moreover, we present an example such that the resulting velocity approximation converges with rate $1/2$ in time and $1$ in space towards the (unknown) target velocity with respect to the natural distance.

We introduce a numerical approach to computing the Schr\"odinger map (SM) based on the Hasimoto transform, which relates the SM flow to a cubic nonlinear Schr\"odinger (NLS) equation. By exploiting this nonlinear transform we are able to introduce the first fully explicit, unconditionally stable symmetric integrators for the SM equation. Our approach consists of two parts: an integration of the NLS equation followed by the numerical evaluation of the Hasimoto transform. Motivated by the desire to study rough solutions to the SM equation, we also introduce a new symmetric low-regularity integrator for the NLS equation. This is combined with our novel fast low-regularity Hasimoto (FLowRH) transform, based on a tailored analysis of the resonance structures in the Magnus expansion and a fast realisation based on block-Toeplitz partitions, to yield an efficient low-regularity integrator for the SM equation. This scheme in particular allows us to obtain approximations to the SM in a more general regime (i.e. under lower regularity assumptions) than previously proposed methods. The favorable properties of our methods are exhibited both in theoretical convergence analysis and in numerical experiments.
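The NLS building block can be illustrated with a textbook scheme. The sketch below is a standard Strang split-step Fourier integrator for the focusing cubic NLS, not the paper's symmetric low-regularity integrator; the grid and initial data in the test are illustrative:

```python
import numpy as np

def nls_strang_step(u, dt, k):
    """One Strang splitting step for the focusing cubic NLS
        i u_t + u_xx + |u|^2 u = 0
    on a periodic grid, with k the integer Fourier wavenumbers.
    The nonlinear flow is solved exactly (|u| is constant along it),
    the linear flow exactly in Fourier space; both substeps preserve
    the L^2 norm (mass), so the composition does too.
    """
    u = u * np.exp(0.5j * dt * np.abs(u) ** 2)   # nonlinear half-step
    u = np.fft.ifft(np.exp(-1j * dt * k**2) * np.fft.fft(u))  # linear step
    u = u * np.exp(0.5j * dt * np.abs(u) ** 2)   # nonlinear half-step
    return u
```

Exact mass conservation (up to round-off) is a standard sanity check for split-step NLS integrators, since each substep is a pointwise or Fourier-side phase rotation.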

We present a potent computational method for the solution of inverse problems in fluid mechanics. We consider inverse problems formulated in terms of a deterministic loss function that can accommodate data and regularization terms. We introduce a multigrid decomposition technique that accelerates the convergence of gradient-based methods for optimization problems with parameters on a grid. We incorporate this multigrid technique into the ODIL (Optimizing a DIscrete Loss) framework. The multiresolution ODIL (mODIL) accelerates the original formalism by an order of magnitude and is better at avoiding local minima. Moreover, mODIL accommodates the use of automatic differentiation for calculating the gradients of the loss function, thus facilitating the implementation of the framework. We demonstrate the capabilities of mODIL on a variety of inverse and flow reconstruction problems: solution reconstruction for the Burgers equation, inferring conductivity from temperature measurements, and inferring body shape from wake velocity measurements in three dimensions. We also provide a comparative study with the related, popular Physics-Informed Neural Networks (PINNs) method. We demonstrate that mODIL has three to five orders of magnitude lower computational cost than PINNs in benchmark problems, including simple PDEs and lid-driven cavity problems. Our results suggest that mODIL is a very potent, fast and consistent method for solving inverse problems in fluid mechanics.
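The "optimize a discrete loss" idea, and the coarse-to-fine warm start behind it, can be sketched on a one-dimensional toy problem. This is a minimal illustration, not the mODIL implementation: it minimizes the squared residual of the discrete Poisson equation u'' = f by plain gradient descent, first on a coarse grid and then on a fine grid seeded by interpolation (all function names and step sizes are assumptions of this sketch):

```python
import numpy as np

def residual(u, f, h):
    # Residual of the discrete Poisson equation u'' = f at interior
    # nodes; the boundary values of u are held fixed at zero.
    return (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 - f[1:-1]

def descend(u, f, h, lr, steps):
    # Gradient descent on the discrete loss L = 0.5 * sum(residual^2),
    # the ODIL idea in its simplest form.
    for _ in range(steps):
        r = residual(u, f, h)
        g = np.zeros_like(u)
        g[1:-1] += -2.0 * r / h**2   # dL/du_i via dr_i/du_i
        g[:-2] += r / h**2           # via dr_{i+1}/du_i
        g[2:] += r / h**2            # via dr_{i-1}/du_i
        g[0] = g[-1] = 0.0           # boundary values stay fixed
        u = u - lr * g
    return u

def coarse_to_fine(f_of_x, n_coarse, n_fine, steps):
    # mODIL-style warm start: optimize on a coarse grid first, then
    # interpolate the result to the fine grid and continue there.
    xc = np.linspace(0.0, 1.0, n_coarse)
    hc = xc[1] - xc[0]
    uc = descend(np.zeros(n_coarse), f_of_x(xc), hc, 0.05 * hc**4, steps)
    xf = np.linspace(0.0, 1.0, n_fine)
    hf = xf[1] - xf[0]
    return descend(np.interp(xf, xc, uc), f_of_x(xf), hf,
                   0.05 * hf**4, steps)
```

The tiny stable step size (of order h^4, set by the squared second-difference operator) makes the ill-conditioning of single-grid descent visible; transferring progress from coarser grids is precisely what the multiresolution decomposition is meant to alleviate.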

Recently, significant progress has been made in vehicle prediction and planning algorithms for autonomous driving. However, it remains quite challenging for an autonomous vehicle to plan its trajectory in complex scenarios in which it is difficult to accurately predict the behaviors and trajectories of surrounding vehicles. In this work, to maximize performance while ensuring safety, we propose a novel speculative planning framework based on a prediction-planning interface that quantifies both the behavior-level and trajectory-level uncertainties of surrounding vehicles. Our framework leverages recent prediction algorithms that can provide one or more possible behaviors and trajectories of the surrounding vehicles with probability estimates. It adapts those predictions based on the latest system states and traffic environment, and conducts planning to maximize the expected reward of the ego vehicle by considering the probabilistic predictions of all scenarios, while ensuring system safety by ruling out actions that may be unsafe in the worst case. We demonstrate the effectiveness of our approach in improving system performance and ensuring system safety over other baseline methods, via extensive simulations in SUMO on a challenging multi-lane highway lane-changing case study.
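The decision rule described above, maximizing expected reward subject to a worst-case safety filter, can be sketched in a few lines. This is a schematic illustration under assumed inputs (per-behavior probabilities, per-action-and-behavior rewards and safety flags), not the paper's planner:

```python
def plan(actions, behaviors, probs, reward, safe):
    """Pick the action with the highest expected reward among those
    that are safe under every predicted behavior of the surrounding
    vehicle (worst-case safety filter).

    probs:  dict behavior -> probability estimate
    reward: dict (action, behavior) -> reward for the ego vehicle
    safe:   dict (action, behavior) -> bool
    """
    feasible = [a for a in actions
                if all(safe[(a, b)] for b in behaviors)]
    if not feasible:
        raise ValueError("no action is safe in the worst case")
    return max(feasible,
               key=lambda a: sum(probs[b] * reward[(a, b)]
                                 for b in behaviors))
```

Note the two-stage structure: safety is enforced over all behaviors regardless of probability, and only then is expected reward used to rank the remaining actions.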

Convergence rate analyses of random walk Metropolis-Hastings Markov chains on general state spaces have largely focused on establishing sufficient conditions for geometric ergodicity or on analysis of mixing times. Geometric ergodicity is a key sufficient condition for the Markov chain Central Limit Theorem and allows rigorous approaches to assessing Monte Carlo error. The sufficient conditions for geometric ergodicity of the random walk Metropolis-Hastings Markov chain are refined and extended, which allows the analysis of previously inaccessible settings such as Bayesian Poisson regression. The key technical innovation is the development of explicit drift and minorization conditions for random walk Metropolis-Hastings, which allow explicit upper and lower bounds on the geometric rate of convergence. Further, lower bounds on the geometric rate of convergence are also developed using spectral theory. The existing sufficient conditions for geometric ergodicity have not, to date, provided explicit constraints on the geometric rate of convergence, because the method used only implies the existence of drift and minorization conditions. The theoretical results are applied to random walk Metropolis-Hastings algorithms for a class of exponential families and generalized linear models that address Bayesian regression problems.
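For readers unfamiliar with the algorithm being analyzed, here is a minimal random walk Metropolis-Hastings sampler with Gaussian proposals; the target, step size, and sample counts in the usage are illustrative, and the theoretical bounds discussed above concern chains of exactly this form:

```python
import numpy as np

def rwmh(logpi, x0, step, n, rng):
    """Random walk Metropolis-Hastings on the real line.
    Proposes y = x + step * N(0, 1) and accepts with probability
    min(1, pi(y) / pi(x)); the symmetric proposal cancels in the ratio.
    logpi is the log target density up to an additive constant.
    """
    x, lp = x0, logpi(x0)
    out = np.empty(n)
    for i in range(n):
        y = x + step * rng.standard_normal()
        lpy = logpi(y)
        if np.log(rng.random()) < lpy - lp:   # accept/reject
            x, lp = y, lpy
        out[i] = x
    return out
```

Drift and minorization conditions for such a chain bound how fast the law of `out[i]` approaches the target as i grows, which is what makes the Monte Carlo error assessable.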

Generative artificial intelligence holds enormous potential to revolutionize decision-making processes, from everyday to high-stakes scenarios. However, as many decisions carry social implications, for AI to be a reliable assistant for decision-making it is crucial that it can capture the balance between self-interest and the interest of others. We investigate the ability of three of the most advanced chatbots to predict dictator game decisions across 78 experiments with human participants from 12 countries. We find that only GPT-4 (neither Bard nor Bing) correctly captures qualitative behavioral patterns, identifying three major classes of behavior: self-interested, inequity-averse, and fully altruistic. Nonetheless, GPT-4 consistently overestimates other-regarding behavior, inflating the proportion of inequity-averse and fully altruistic participants. This bias has significant implications for AI developers and users.

We consider a class of problems of Discrete Tomography which has been deeply investigated in the past: the reconstruction of convex lattice sets from their horizontal and/or vertical X-rays, i.e. from the number of points in a sequence of consecutive horizontal and vertical lines. The reconstruction of HV-convex polyominoes usually proceeds in two steps: first, a filling step consisting of filling operations; second, the convex aggregation of the switching components. We prove three results about the convex aggregation step: (1) The convex aggregation step used for the reconstruction of HV-convex polyominoes does not always provide a solution. The example yielding this result is called \textit{the bad guy} and disproves a conjecture in the field. (2) The reconstruction of a digital convex lattice set from only one X-ray can be performed in polynomial time. We prove it by encoding the convex aggregation problem in a Directed Acyclic Graph. (3) With the same strategy, we prove that the reconstruction of fat digital convex sets from their horizontal and vertical X-rays can be solved in polynomial time. Fatness is a property of digital convex sets regarding the relative positions of the leftmost, rightmost, top, and bottom points of the set. The complexity of the reconstruction of the lattice sets which are not fat remains an open question.

Modern mainstream financial theory is underpinned by the efficient market hypothesis, which posits the rapid incorporation of relevant information into asset pricing. A few prior studies in the operational research literature have investigated tests designed for random number generators as checks for these informational efficiencies. Treating binary daily returns as an analogue of a hardware random number generator, tests of overlapping permutations have indicated that these time series feature idiosyncratic recurrent patterns. Contrary to prior studies, we split our analysis into two streams, at the annual and company level, and investigate longer-term efficiency over a larger time frame for Nasdaq-listed public companies to diminish the effects of trading noise and allow the market to realistically digest new information. Our results demonstrate that information efficiency varies across years and reflects large-scale market impacts such as financial crises. We also show that our results are close to those of a well-tested pseudo-random number generator, discuss the distinction between theoretical and practical market efficiency, and find that the statistical qualification of stock-separated returns in support of the efficient market hypothesis depends on small inefficient subsets that skew market assessments.
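The core of an overlapping-permutation-style check can be sketched as follows: count overlapping k-bit windows of the binarized return series and measure their deviation from uniformity. This is a simplified serial-test statistic for illustration only; production randomness suites (e.g. the NIST SP 800-22 tests) apply corrections because overlapping windows are not independent:

```python
from collections import Counter
from itertools import product

def overlapping_counts(bits, k):
    """Counts of overlapping k-grams in a 0/1 sequence."""
    return Counter(tuple(bits[i:i + k])
                   for i in range(len(bits) - k + 1))

def chi2_uniform(counts, n_windows, k):
    """Chi-square-style deviation of the k-gram counts from the
    uniform expectation n_windows / 2^k, summed over all 2^k
    patterns (including those never observed)."""
    expected = n_windows / 2 ** k
    return sum((counts.get(p, 0) - expected) ** 2 / expected
               for p in product((0, 1), repeat=k))
```

A perfectly efficient binary return series would make every k-gram equally likely, driving the statistic toward its null distribution; the recurrent patterns reported above show up as inflated values.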

The ability to envision future states is crucial to informed decision-making while interacting with dynamic environments. With cameras providing a prevalent and information-rich sensing modality, the problem of predicting future states from image sequences has garnered a lot of attention. Current state-of-the-art methods typically train large parametric models for their predictions. Though often able to predict accurately, these models rely on the availability of large training datasets to converge to useful solutions. In this paper, we focus on the problem of predicting future images of an image sequence from very little training data. To approach this problem, we use non-parametric models and take a probabilistic approach to image prediction. We generate probability distributions over sequentially predicted images and propagate uncertainty through time to generate a confidence metric for our predictions. Gaussian Processes are used for their data efficiency and their ability to readily incorporate new training data online. We showcase our method by successfully predicting future frames of a smooth fluid simulation environment.
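The Gaussian Process machinery behind such predictions can be sketched in a few lines of standard GP regression. This is a generic scalar illustration with an RBF kernel and assumed hyperparameters, not the paper's image-prediction model; note how the predictive variance supplies exactly the kind of confidence metric described above:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential (RBF) kernel between 1D input arrays."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(X, y, Xs, ell=1.0, sf=1.0, noise=1e-4):
    """GP posterior mean and variance at test inputs Xs, given
    training inputs X and targets y, under an RBF prior."""
    K = rbf(X, X, ell, sf) + noise * np.eye(len(X))
    Ks = rbf(Xs, X, ell, sf)
    mean = Ks @ np.linalg.solve(K, y)
    # predictive variance: prior variance minus the data's reduction
    var = (sf**2 + noise
           - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1))
    return mean, var
```

Far from the training data the predictive variance reverts to the prior variance sf^2, which is what lets uncertainty grow as predictions are rolled forward in time.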
