In the paper [Hainaut, D. and Colwell, D.B., A structural model for credit risk with switching processes and synchronous jumps, The European Journal of Finance 22(11) (2016): 1040-1062], the authors exploit a synchronous-jump regime-switching model to compute the default probability of a publicly traded company. Here, we first generalize the proposed L\'evy model to the more general setting of tempered stable processes recently introduced into the finance literature. Based on the singularity of the resulting partial integro-differential operator, we propose a general framework based on strictly positive-definite functions to de-singularize the operator. We then analyze an efficient meshfree collocation method based on radial basis functions to approximate the solution of the corresponding system of partial integro-differential equations arising from the structural credit risk model. We show that under some regularity assumptions, our proposed method naturally de-singularizes the problem in the tempered stable case. Numerical results of applying the method to some standard examples from the literature confirm the accuracy of our theoretical results and numerical algorithm.
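For orientation, a generic form of the regime-$i$ partial integro-differential equation in such a model, with a tempered stable (CGMY-type) L\'evy density, is sketched below; the drift, diffusion, and regime-coupling terms are illustrative, and the paper's exact operator and notation may differ:
\[
\partial_t u_i + \mu_i\,\partial_x u_i + \tfrac{1}{2}\sigma_i^2\,\partial_{xx} u_i
+ \int_{\mathbb{R}} \bigl( u_i(x+z) - u_i(x) - z\,\mathbf{1}_{\{|z|\le 1\}}\,\partial_x u_i(x) \bigr)\, \nu_i(dz)
+ \sum_{j \ne i} q_{ij}\,\bigl( u_j(x) - u_i(x) \bigr) = 0,
\]
\[
\nu_i(dz) = \Bigl( \frac{c_i^-\, e^{-\lambda_i^- |z|}}{|z|^{1+\alpha_i}}\,\mathbf{1}_{\{z<0\}}
+ \frac{c_i^+\, e^{-\lambda_i^+ z}}{z^{1+\alpha_i}}\,\mathbf{1}_{\{z>0\}} \Bigr)\, dz .
\]
The non-integrable factor $|z|^{-1-\alpha_i}$ at $z=0$ is the singularity that the positive-definite-function framework is designed to remove.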

Related content

Elimination of unknowns in a system of differential equations is often required when analysing (possibly nonlinear) dynamical systems models in which only a subset of the variables is observable. One such analysis, identifiability, often relies on computing input-output relations via differential algebraic elimination. Determining identifiability, a natural prerequisite for meaningful parameter estimation, is often prohibitively expensive for medium to large systems because of the computationally expensive elimination step. We propose an algorithm that computes a description of the set of differential-algebraic relations between the input and output variables of a dynamical system model. The resulting algorithm outperforms general-purpose software for differential elimination on a set of benchmark models from the literature. We use the designed elimination algorithm to build a new randomized algorithm for assessing structural identifiability of a parameter in a parametric model. A parameter is said to be identifiable if its value can be uniquely determined from input-output data, assuming the absence of noise and sufficiently exciting inputs. Our new algorithm allows the identification of models that could not be tackled before. Our implementation is publicly available as a Julia package at //github.com/SciML/StructuralIdentifiability.jl.
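As a toy illustration of the input-output relation idea (not the paper's elimination algorithm), the SymPy sketch below eliminates the unobserved state of a hypothetical two-state model $x_1' = -a x_1$, $x_2' = b x_1$, $y = x_2$ and recovers the relation $y'' + a y' = 0$; the relation involves $a$ but not $b$, which is the kind of information an identifiability analysis reads off such relations.

import sympy as sp

t = sp.symbols('t')
a, b = sp.symbols('a b', positive=True)
x1 = sp.Function('x1')(t)

# Toy model: x1' = -a*x1, x2' = b*x1, observed output y = x2.
dy = b * x1                                                # y'  = x2' = b*x1
d2y = sp.diff(dy, t).subs(sp.Derivative(x1, t), -a * x1)   # y'' with x1' eliminated

# The unobserved state x1 cancels: the input-output relation is y'' + a*y' = 0.
print(sp.simplify(d2y + a * dy))                           # prints 0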

To gain a better theoretical understanding of how evolutionary algorithms (EAs) cope with plateaus of constant fitness, we propose the $n$-dimensional Plateau$_k$ function as a natural benchmark and analyze how different variants of the $(1 + 1)$ EA optimize it. The Plateau$_k$ function has a plateau of second-best fitness in a ball of radius $k$ around the optimum. As the evolutionary algorithm, we consider the $(1 + 1)$ EA with an arbitrary unbiased mutation operator. Denoting by $\alpha$ the random number of bits flipped in an application of this operator and assuming that $\Pr[\alpha = 1]$ is at least some small sub-constant value, we show the surprising result that for all constant $k \ge 2$, the runtime $T$ follows a distribution close to the geometric one with success probability equal to the probability of flipping between $1$ and $k$ bits divided by the size of the plateau. Consequently, the expected runtime is the inverse of this number and thus depends only on the probability of flipping between $1$ and $k$ bits, not on other characteristics of the mutation operator. Our result also implies that the optimal mutation rate for standard bit mutation here is approximately $k/(en)$. Our main analysis tool is a combined analysis of the Markov chains on the search point space and on the Hamming level space, an approach that promises to be useful also for other plateau problems.
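A minimal sketch of the $(1+1)$ EA with standard bit mutation on a Plateau$_k$-style function is given below; the fitness definition follows one common formulation (a constant second-best value on the Hamming ball of radius $k$ around the all-ones optimum, excluding the optimum itself) and may differ in detail from the paper's exact benchmark.

import random

def plateau_fitness(x, k):
    # One common Plateau_k-style formulation (assumption, not necessarily the
    # paper's exact definition): OneMax slope, a constant second-best value on
    # the Hamming ball of radius k around the all-ones string, best value at 1^n.
    n = len(x)
    ones = sum(x)
    if ones == n:
        return n + 1          # unique optimum
    if ones >= n - k:
        return n - k          # plateau of second-best fitness
    return ones               # OneMax-like slope leading to the plateau

def one_plus_one_ea(n=50, k=2, max_iters=10**6, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = plateau_fitness(x, k)
    for it in range(1, max_iters + 1):
        # standard bit mutation: flip each bit independently with probability 1/n
        y = [1 - bit if rng.random() < 1.0 / n else bit for bit in x]
        fy = plateau_fitness(y, k)
        if fy >= fx:          # accepting equal fitness lets the EA drift across the plateau
            x, fx = y, fy
        if fx == n + 1:
            return it         # iterations until the optimum was sampled
    return None

print(one_plus_one_ea())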

We investigate the calibration of estimators, i.e., increasing their performance by applying an optimal monotone transform to their outputs. We start by studying the traditional squared-error setting and its weighted variant and show that the optimal monotone transform is a unique staircase function. We further show that this staircase behavior is preserved for general strictly convex loss functions: their optimal monotone transforms are also unique, i.e., there exists a single staircase transform that achieves the minimum loss. We propose a linear time and space algorithm that finds such optimal transforms for specific loss settings. Our algorithm has an online implementation in which the optimal transform for the samples observed so far is found in linear space and amortized time when the samples arrive in an ordered fashion. We also extend our results to cases where the functions are not trivial to optimize individually and propose an anytime algorithm with linear space and pseudo-linearithmic time complexity.
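For the plain squared-error case, the optimal monotone transform of the raw scores coincides with an isotonic regression of the targets on the scores, whose solution is a staircase function computable by pool-adjacent-violators. The sketch below uses scikit-learn's IsotonicRegression purely for illustration and is not the authors' linear-time or anytime algorithm; the data are synthetic.

import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, size=200)                                # raw estimator outputs
targets = (rng.uniform(0, 1, size=200) < scores**2).astype(float)   # deliberately miscalibrated labels

iso = IsotonicRegression(out_of_bounds="clip")   # monotone, piecewise-constant (staircase) fit
calibrated = iso.fit_transform(scores, targets)

# The fitted transform is a staircase: only a few distinct output levels survive.
print("distinct levels:", len(np.unique(np.round(calibrated, 12))))
print("squared error before:", np.mean((scores - targets) ** 2))
print("squared error after: ", np.mean((calibrated - targets) ** 2))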

Minimum residual methods such as the least-squares finite element method (FEM) or the discontinuous Petrov--Galerkin method with optimal test functions (DPG) usually exclude singular data, e.g., non-square-integrable loads. We consider a DPG method and a least-squares FEM for the Poisson problem. For both methods we analyze regularization approaches that allow the use of $H^{-1}$ loads, and we also study the case of point loads. For all cases we prove appropriate convergence orders. We present various numerical experiments that confirm our theoretical results. Our approach extends to general well-posed second-order problems.
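As an illustration of why singular data are excluded, consider a generic first-order-system least-squares formulation of the Poisson problem (one of several possible formulations; the DPG setting is analogous):
\[
-\Delta u = f \ \text{in } \Omega, \quad u = 0 \ \text{on } \partial\Omega,
\qquad
\min_{(u,\boldsymbol{\sigma})} \ \| \boldsymbol{\sigma} - \nabla u \|_{L^2(\Omega)}^2
+ \| \operatorname{div} \boldsymbol{\sigma} + f \|_{L^2(\Omega)}^2 .
\]
The second term requires $f \in L^2(\Omega)$; for $f \in H^{-1}(\Omega)$ or a point load it is not defined, which is precisely what the regularization approaches analyzed here address.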

In this work a non-conservative balance law formulation is considered that encompasses the rotating, compressible Euler equations for dry atmospheric flows. We develop a semi-discretely entropy stable discontinuous Galerkin method on curvilinear meshes using a generalization of flux differencing for numerical fluxes in fluctuation form. The method uses the skew-hybridized formulation of the element operators to ensure that, even in the presence of under-integration on curvilinear meshes, the resulting discretization is entropy stable. Several atmospheric flow test cases in one, two, and three dimensions confirm the theoretical entropy stability results as well as show the high-order accuracy and robustness of the method.
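For reference, in the simplest conservative, one-dimensional setting a flux-differencing semi-discretization on one element reads as follows; the paper generalizes this mechanism to numerical fluxes in fluctuation form for the non-conservative balance law on curvilinear meshes:
\[
\frac{d u_i}{dt} + 2 \sum_{j} D_{ij}\, f_S(u_i, u_j) = \text{boundary terms},
\]
where $D$ is a summation-by-parts differentiation matrix and $f_S$ a symmetric, entropy-conservative two-point flux; entropy dissipation is then added through the interface fluxes.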

In this paper, we establish minimax optimal rates of convergence for prediction in a semi-functional linear model that consists of a functional component and a less smooth nonparametric component. Our results reveal that the smoother functional component can be learned with the minimax rate as if the nonparametric component were known. More specifically, a double-penalized least squares method is adopted to estimate both the functional and nonparametric components within the framework of reproducing kernel Hilbert spaces. By virtue of the representer theorem, an efficient algorithm that requires no iterations is proposed to solve the corresponding optimization problem, where the regularization parameters are selected by the generalized cross validation criterion. Numerical studies are provided to demonstrate the effectiveness of the method and to verify the theoretical analysis.
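In generic notation (the paper's notation may differ), the semi-functional linear model and the double-penalized least squares estimator can be written as
\[
Y_i = \int_0^1 X_i(t)\,\beta_0(t)\, dt + g_0(Z_i) + \varepsilon_i,
\qquad
(\hat\beta, \hat g) = \arg\min_{\beta \in \mathcal{H}_1,\ g \in \mathcal{H}_2}
\ \frac{1}{n} \sum_{i=1}^n \Bigl( Y_i - \int_0^1 X_i(t)\beta(t)\,dt - g(Z_i) \Bigr)^2
+ \lambda_1 \|\beta\|_{\mathcal{H}_1}^2 + \lambda_2 \|g\|_{\mathcal{H}_2}^2 ,
\]
where $\mathcal{H}_1$ and $\mathcal{H}_2$ are reproducing kernel Hilbert spaces and the regularization parameters $(\lambda_1, \lambda_2)$ are selected by generalized cross validation.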

A method is presented for the evaluation of integrals on tetrahedra where the integrand has a singularity at one vertex. The approach uses a transformation to spherical polar coordinates which explicitly eliminates the singularity and facilitates the evaluation of integration limits. The method can also be implemented in an adaptive form which gives convergence to a required tolerance. Results from the method are compared to the output from an exact analytical method and show high accuracy. In particular, when the adaptive algorithm is used, highly accurate results are found for poorly conditioned tetrahedra which normally present difficulties for numerical quadrature techniques.
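The mechanism behind the transformation is standard: centering spherical polar coordinates at the singular vertex $\mathbf{x}_0$ introduces the Jacobian $r^2 \sin\theta$, which cancels point singularities of the form $|\mathbf{x}-\mathbf{x}_0|^{-p}$. A sketch for such an integrand (the limits $\Phi$, $\Theta(\phi)$, $R(\theta,\phi)$ are determined by the tetrahedron's faces, and their explicit evaluation together with the adaptive refinement is the paper's contribution):
\[
\int_T \frac{g(\mathbf{x})}{|\mathbf{x} - \mathbf{x}_0|^{p}} \, dV
= \int_{0}^{\Phi} \int_{0}^{\Theta(\phi)} \int_{0}^{R(\theta,\phi)} g\bigl(\mathbf{x}(r,\theta,\phi)\bigr)\, r^{2-p} \sin\theta \, dr \, d\theta \, d\phi ,
\]
so the transformed integrand is bounded for $p \le 2$ (and the radial integral converges for $p < 3$).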

We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems that covers different existing variants and model settings from the literature. We prove that our algorithm achieves optimal $\mathcal{O}(n/k)$ and $\mathcal{O}(n^2/k^2)$ convergence rates (up to a constant factor) in two cases: general convexity and strong convexity, respectively, where $k$ is the iteration counter and $n$ is the number of block-coordinates. Our convergence rates are obtained through three criteria: primal objective residual and primal feasibility violation, dual objective residual, and primal-dual expected gap. Moreover, our rates for the primal problem are established for the last-iterate sequence. Our dual convergence guarantee additionally requires a Lipschitz continuity assumption. We specialize our algorithm to handle two important special cases, where our rates still apply. Finally, we verify our algorithm on two well-studied numerical examples and compare it with two existing methods. Our results show that the proposed method has encouraging performance on different experiments.
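One common template for this problem class (the paper's setting may include additional composite terms) is
\[
\min_{x = (x_1, \dots, x_n)} \ \sum_{i=1}^n f_i(x_i)
\quad \text{subject to} \quad \sum_{i=1}^n A_i x_i = b,
\]
where each $f_i$ is convex and possibly nonsmooth, the $x_i$ are the block-coordinates sampled at random by the algorithm, and the dual variable is attached to the linear constraint.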

We consider the numerical solution of the Cauchy problem for an evolutionary equation with memory in which the kernel of the integral term is of difference (convolution) type. A direct computational implementation requires working with the approximate solution at all previous time levels. In this paper, the nonlocal problem is transformed into a local one: a loosely coupled system consisting of the original equation and additional ordinary differential equations is solved. This approach is based on approximating the difference kernel by a sum of exponentials. Stability estimates for the solution with respect to the initial data and the right-hand side are obtained for the corresponding Cauchy problem. Two-level weighted schemes that are convenient to implement computationally are constructed and investigated. The theoretical considerations are supplemented by numerical results for an integrodifferential equation whose kernel is a stretched exponential function.
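A generic sketch of the underlying reduction (the paper's operators and weighted two-level schemes differ in detail): approximating the difference kernel by a sum of exponentials turns the memory term into a set of auxiliary ordinary differential equations,
\[
\frac{du}{dt} + A u + \int_0^t k(t-s)\, B u(s)\, ds = f(t),
\qquad
k(t) \approx \sum_{j=1}^m c_j e^{-\lambda_j t},
\]
and with the auxiliary variables
\[
w_j(t) = \int_0^t e^{-\lambda_j (t-s)} B u(s)\, ds,
\qquad
\frac{d w_j}{dt} = -\lambda_j w_j + B u, \quad w_j(0) = 0,
\]
the nonlocal problem becomes the local coupled system $u' + A u + \sum_{j=1}^m c_j w_j = f$.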

We consider the problem of testing for long-range dependence in time-varying coefficient regression models. The covariates and errors are assumed to be locally stationary, which allows for complex temporal dynamics and heteroscedasticity. We develop KPSS, R/S, V/S, and K/S-type statistics based on the nonparametric residuals and propose bootstrap approaches equipped with a difference-based long-run covariance matrix estimator for practical implementation. Under the null hypothesis, under local alternatives, and under fixed alternatives, we derive the limiting distributions of the test statistics, establish the uniform consistency of the difference-based long-run covariance estimator, and justify the bootstrap algorithms theoretically. In particular, the exact local asymptotic power of our testing procedure is of order $O( \log^{-1} n)$, the same as that of the classical KPSS test for long memory in strictly stationary series without covariates. We demonstrate the effectiveness of our tests through extensive simulation studies. The proposed tests are applied to a COVID-19 dataset, where they find evidence of long-range dependence in the cumulative confirmed-case series of several countries, and to a Hong Kong circulatory and respiratory dataset, where they identify a new type of 'spurious long memory'.
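For orientation, the classical KPSS statistic built from residuals $\hat e_t$ has the form below; the versions developed here replace parametric residuals with nonparametric ones from the time-varying coefficient fit and use a difference-based long-run covariance estimator in place of $\hat\sigma^2_{\mathrm{lr}}$:
\[
\mathrm{KPSS}_n = \frac{1}{n^2\, \hat\sigma^2_{\mathrm{lr}}} \sum_{t=1}^{n} \Bigl( \sum_{s=1}^{t} \hat e_s \Bigr)^{2} .
\]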
