
Spectral deferred corrections (SDC) are a class of iterative methods for the numerical solution of ordinary differential equations. SDC can be interpreted as a Picard iteration to solve a fully implicit collocation problem, preconditioned with a low-order method. It has been widely studied for first-order problems, using explicit, implicit or implicit-explicit Euler and other low-order methods as preconditioners. For first-order problems, SDC achieves arbitrary order of accuracy and possesses good stability properties. While numerical results for SDC applied to the second-order Lorentz equations exist, no theoretical results are available for SDC applied to second-order problems. We present an analysis of the convergence and stability properties of SDC using velocity-Verlet as the base method for general second-order initial value problems. Our analysis proves that the order of convergence depends on whether the force in the system depends on the velocity. We also demonstrate that the SDC iteration is stable under certain conditions. Finally, we show that SDC can be computationally more efficient than a simple Picard iteration or a fourth-order Runge-Kutta-Nystr\"om method.
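As a minimal illustration of the base method (not the SDC iteration itself), the following sketch integrates a second-order problem with velocity-Verlet and checks its second-order convergence on the harmonic oscillator; the test problem and step sizes are illustrative choices, not taken from the paper.

```python
import numpy as np

def velocity_verlet(f, x0, v0, dt, n_steps):
    """Integrate x'' = f(x) with the velocity-Verlet scheme."""
    x, v = x0, v0
    a = f(x)
    for _ in range(n_steps):
        x = x + dt * v + 0.5 * dt**2 * a  # position update
        a_new = f(x)
        v = v + 0.5 * dt * (a + a_new)    # velocity update with averaged force
        a = a_new
    return x, v

# Harmonic oscillator x'' = -x with x(0) = 1, v(0) = 0; exact x(t) = cos(t).
err = []
for dt in (1e-2, 5e-3):
    x_end, _ = velocity_verlet(lambda x: -x, 1.0, 0.0, dt, round(1.0 / dt))
    err.append(abs(x_end - np.cos(1.0)))
order = np.log2(err[0] / err[1])  # observed convergence order, close to 2
```

Halving the step size reduces the error by roughly a factor of four, confirming the second-order accuracy of the preconditioner that SDC sweeps then improve upon.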


We study the optimal sample complexity of neighbourhood selection in linear structural equation models, and compare this to best subset selection (BSS) for linear models under general design. We show by example that -- even when the structure is \emph{unknown} -- the existence of underlying structure can reduce the sample complexity of neighbourhood selection. This result is complicated by the possibility of path cancellation, which we study in detail, and show that improvements are still possible in the presence of path cancellation. Finally, we support these theoretical observations with experiments. The proof introduces a modified BSS estimator, called klBSS, and compares its performance to BSS. The analysis of klBSS may also be of independent interest since it applies to arbitrary structured models, not necessarily those induced by a structural equation model. Our results have implications for structure learning in graphical models, which often relies on neighbourhood selection as a subroutine.
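The baseline estimator in this comparison, best subset selection, can be sketched as an exhaustive search over supports. The code below is a naive illustration on a toy design of our choosing (exponential in the number of variables, and not the klBSS modification introduced in the paper).

```python
import itertools
import numpy as np

def best_subset(X, y, k):
    """Exhaustive best subset selection: the size-k support minimizing RSS."""
    best_rss, best_support = np.inf, None
    for support in itertools.combinations(range(X.shape[1]), k):
        Xs = X[:, support]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        if rss < best_rss:
            best_rss, best_support = rss, support
    return best_support

# Toy linear model: only variables 1 and 4 are active.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 6))
y = X[:, 1] - 2 * X[:, 4] + 0.1 * rng.standard_normal(100)
support = best_subset(X, y, 2)
```

With a strong signal-to-noise ratio, the search recovers the true support; the sample-complexity question studied in the paper is how few observations suffice for such recovery under structure.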

We consider the numerical approximation of different ordinary differential equations (ODEs) and partial differential equations (PDEs) with periodic boundary conditions involving a one-dimensional random parameter, comparing the intrusive and non-intrusive polynomial chaos expansion (PCE) methods. We demonstrate how to modify two schemes for intrusive PCE (iPCE) which are highly efficient in solving nonlinear reaction-diffusion equations: a second-order exponential time differencing scheme (ETD-RDP-IF) as well as a spectral exponential time differencing fourth-order Runge-Kutta scheme (ETDRK4). In numerical experiments, we show that these schemes achieve superior accuracy to simpler schemes such as the EE scheme for a range of model equations, and we investigate whether they are competitive with non-intrusive PCE (niPCE) methods. We observe that the iPCE schemes are competitive with niPCE for some model equations, but that iPCE breaks down for complex pattern formation models such as the Gray-Scott system.
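A minimal sketch of the non-intrusive side of this comparison: for a single uniform parameter on [-1, 1], niPCE projects model evaluations onto Legendre polynomials via Gauss quadrature. The model function, truncation order, and quadrature size below are illustrative assumptions, not the equations studied in the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def nipce_coeffs(model, order, n_quad=32):
    """Non-intrusive PCE for one uniform parameter on [-1, 1]:
    c_k = (2k+1)/2 * integral of model * P_k, by Gauss-Legendre quadrature."""
    nodes, weights = legendre.leggauss(n_quad)
    vals = model(nodes)
    coeffs = []
    for k in range(order + 1):
        Pk = legendre.Legendre.basis(k)(nodes)
        coeffs.append((2 * k + 1) / 2 * np.sum(weights * vals * Pk))
    return np.array(coeffs)

# Toy model response u(xi) = sin(xi); build a degree-8 PCE surrogate.
c = nipce_coeffs(np.sin, order=8)
surrogate = legendre.Legendre(c)
err = abs(surrogate(0.3) - np.sin(0.3))
```

The zeroth coefficient is the mean of the response (zero here by symmetry), and the surrogate reproduces the smooth model to high accuracy, which is the standard of comparison an iPCE scheme must meet.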

Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples can be difficult through standard methods. Inference can also be highly sensitive to the choice of threshold. Too low a threshold choice leads to bias in the fit of the extreme value model, while too high a choice leads to unnecessary additional uncertainty in the estimation of model parameters. In this paper, we develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in this threshold choice and propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation. We apply our method to the well-known, troublesome example of the River Nidd dataset.
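A classic diagnostic underlying such threshold choices is the mean excess (mean residual life) plot: above any valid threshold for an exponential tail, the sample mean excess is roughly constant. The sketch below uses simulated exponential data to illustrate this diagnostic; it is not the automated method or the Nidd data from the paper.

```python
import numpy as np

def mean_excess(data, thresholds):
    """Sample mean excess E[X - u | X > u] at each candidate threshold u."""
    return np.array([(data[data > u] - u).mean() for u in thresholds])

# Exponential(1) data: the true mean excess equals 1 at every threshold.
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=50_000)
thresholds = np.array([0.5, 1.0, 1.5, 2.0])
me = mean_excess(x, thresholds)
```

In practice the plot is linear (here constant) only above a sufficiently high threshold, and reading off where linearity begins is exactly the subjective step that automated selection methods aim to replace.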

Ordinary differential equations (ODEs) can provide mechanistic models of temporally local changes of processes, where parameters are often informed by external knowledge. While ODEs are popular in systems modeling, they are less established for statistical modeling of longitudinal cohort data, e.g., in a clinical setting. Yet, modeling of local changes could also be attractive for assessing the trajectory of an individual in a cohort in the immediate future given their current status, where ODE parameters could be informed by further characteristics of the individual. However, several hurdles so far limit such use of ODEs, as compared to regression-based function fitting approaches. The potentially higher level of noise in cohort data might be detrimental to ODEs, as the shape of the ODE solution heavily depends on the initial value. In addition, larger numbers of variables multiply such problems and might be difficult for ODEs to handle. To address this, we propose to use each observation in the course of time as the initial value to obtain multiple local ODE solutions and build a combined estimator of the underlying dynamics. Neural networks are used for obtaining a low-dimensional latent space for dynamic modeling from a potentially large number of variables, and for obtaining patient-specific ODE parameters from baseline variables. Simultaneous identification of dynamic models and of a latent space is enabled by recently developed differentiable programming techniques. We illustrate the proposed approach in an application with spinal muscular atrophy patients and a corresponding simulation study. In particular, modeling of local changes in health status at any point in time is contrasted to the interpretation of functions obtained from a global regression. This more generally highlights how different application settings might demand different modeling strategies.
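The idea of using each observation as an initial value can be sketched on a scalar linear ODE x' = a*x: every observation starts a local solution to the next time point, and a combined squared-error criterion is minimized over the parameter. The grid search and noise-free data below are illustrative simplifications; the paper uses neural networks and gradient-based training instead.

```python
import numpy as np

def local_ode_estimate(t, x, a_grid):
    """Estimate the rate a in x' = a*x by treating each observation as the
    initial value of a local solution to the next time point, and minimizing
    the combined squared prediction error over a candidate grid."""
    best_a, best_err = None, np.inf
    for a in a_grid:
        pred = x[:-1] * np.exp(a * np.diff(t))  # local solutions, one per observation
        err = np.sum((x[1:] - pred) ** 2)
        if err < best_err:
            best_a, best_err = a, err
    return best_a

# Trajectory of x' = -0.5 * x observed at 21 time points.
t = np.linspace(0.0, 2.0, 21)
x = np.exp(-0.5 * t)
a_hat = local_ode_estimate(t, x, np.linspace(-1.0, 1.0, 201))
```

Because every short segment contributes to the criterion, a single noisy early observation no longer dictates the whole fitted trajectory, which is the robustness argument made above.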

We construct a family of finite element sub-complexes of the conformal complex on tetrahedral meshes. This complex includes vector fields and symmetric and traceless tensor fields, interlinked through the conformal Killing operator, the linearized Cotton-York operator, and the divergence operator, respectively. This leads to discrete versions of transverse traceless (TT) tensors and York splits in general relativity. We provide bubble complexes and investigate supersmoothness to facilitate the construction. We show the exactness of the finite element complex on contractible domains.

For the numerical solution of the cubic nonlinear Schr\"{o}dinger equation with periodic boundary conditions, a pseudospectral method in space combined with a filtered Lie splitting scheme in time is considered. This scheme is shown to converge even for initial data with very low regularity. In particular, for data in $H^s(\mathbb T^2)$, where $s>0$, convergence of order $\mathcal O(\tau^{s/2}+N^{-s})$ is proved in $L^2$. Here $\tau$ denotes the time step size and $N$ the number of Fourier modes considered. The proof of this result is carried out in an abstract framework of discrete Bourgain spaces, the final convergence result, however, is given in $L^2$. The stated convergence behavior is illustrated by several numerical examples.
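A minimal sketch of the scheme's ingredients in one space dimension (the paper works on the torus $\mathbb T^2$ and adds a frequency filter): Lie splitting alternates an exact Fourier-space solve of the linear flow with an exact pointwise solve of the nonlinear flow. Both substeps preserve the discrete $L^2$ norm, which the sketch verifies; the initial data and step counts are illustrative.

```python
import numpy as np

def nls_lie_splitting(u0, dt, n_steps, L=2 * np.pi):
    """Lie splitting for the focusing cubic NLS  i u_t = -u_xx - |u|^2 u
    with periodic boundary conditions on [0, L)."""
    N = u0.size
    k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # integer wavenumbers for L = 2*pi
    u = u0.astype(complex)
    for _ in range(n_steps):
        # Linear flow, solved exactly in Fourier space.
        u = np.fft.ifft(np.exp(-1j * dt * k**2) * np.fft.fft(u))
        # Nonlinear flow, solved exactly pointwise (|u| is invariant).
        u = u * np.exp(1j * dt * np.abs(u) ** 2)
    return u

N = 128
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
u0 = np.exp(1j * x) + 0.1 * np.exp(2j * x)
u = nls_lie_splitting(u0, dt=1e-3, n_steps=1000)
mass0 = np.sum(np.abs(u0) ** 2)
mass = np.sum(np.abs(u) ** 2)
```

Since each substep is a unitary operation on the grid values, the discrete mass is conserved to rounding error, mirroring the $L^2$ setting in which the convergence result above is stated.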

We consider a general family of nonlocal in space and time diffusion equations with space-time dependent diffusivity and prove convergence of finite difference schemes in the context of viscosity solutions under very mild conditions. The proofs, based on regularity properties and compactness arguments on the numerical solution, allow us to inherit a number of interesting results for the limit equation. More precisely, assuming H\"older regularity only on the initial condition, we prove convergence of the scheme, space-time H\"older regularity of the solution depending on the fractional orders of the operators, as well as specific blow up rates of the first time derivative. Finally, using the obtained regularity results, we are able to prove orders of convergence of the scheme in some cases. These results are consistent with previous studies. The schemes' performance is further numerically verified using both constructed exact solutions and realistic examples. Our experiments show that multithreaded implementation yields an efficient method to solve nonlocal equations numerically.

We present a new Krylov subspace recycling method for solving a linear system of equations, or a sequence of slowly changing linear systems. Our new method, named GMRES-SDR, combines randomized sketching and deflated restarting in a way that avoids orthogonalizing a full Krylov basis. We provide new theory which characterizes sketched GMRES with and without augmentation as a projection method using a semi-inner product. We present results of numerical experiments demonstrating the effectiveness of GMRES-SDR over competitor methods such as GMRES-DR and GCRO-DR.

Common regularization algorithms for linear regression, such as LASSO and Ridge regression, rely on a regularization hyperparameter that balances the trade-off between minimizing the fitting error and the norm of the learned model coefficients. As this hyperparameter is scalar, it can be easily selected via random or grid search optimizing a cross-validation criterion. However, using a scalar hyperparameter limits the algorithm's flexibility and potential for better generalization. In this paper, we address the problem of linear regression with $\ell_2$-regularization, where a different regularization hyperparameter is associated with each input variable. We optimize these hyperparameters using a gradient-based approach, wherein the gradient of a cross-validation criterion with respect to the regularization hyperparameters is computed analytically through matrix differential calculus. Additionally, we introduce two strategies tailored for sparse model learning problems aimed at reducing the risk of overfitting to the validation data. Numerical examples demonstrate that our multi-hyperparameter regularization approach outperforms LASSO, Ridge, and Elastic Net regression. Moreover, the analytical computation of the gradient proves to be more efficient in terms of computational time compared to automatic differentiation, especially when handling a large number of input variables. Application to the identification of over-parameterized Linear Parameter-Varying models is also presented.
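The analytic gradient can be sketched for a simple holdout criterion (the paper uses cross-validation): with $A = X_{\mathrm{tr}}^\top X_{\mathrm{tr}} + \mathrm{diag}(\lambda)$ and $\beta = A^{-1} X_{\mathrm{tr}}^\top y_{\mathrm{tr}}$, matrix differential calculus gives $\partial L/\partial \lambda_j = 2\beta_j (A^{-1} X_{\mathrm{val}}^\top r)_j$ for the validation residual $r = y_{\mathrm{val}} - X_{\mathrm{val}}\beta$. The data and split below are illustrative; the code verifies the formula against a finite difference.

```python
import numpy as np

def ridge_fit(Xtr, ytr, lam):
    """Per-feature ridge: beta = (Xtr'Xtr + diag(lam))^{-1} Xtr' ytr."""
    A = Xtr.T @ Xtr + np.diag(lam)
    return np.linalg.solve(A, Xtr.T @ ytr), A

def val_loss_and_grad(Xtr, ytr, Xval, yval, lam):
    """Holdout loss and its analytic gradient w.r.t. per-feature weights."""
    beta, A = ridge_fit(Xtr, ytr, lam)
    r = yval - Xval @ beta
    loss = r @ r
    # dL/dlam_j = 2 * beta_j * (A^{-1} Xval' r)_j, since dbeta/dlam_j = -beta_j A^{-1} e_j
    grad = 2 * beta * np.linalg.solve(A, Xval.T @ r)
    return loss, grad

rng = np.random.default_rng(0)
Xtr, Xval = rng.standard_normal((40, 5)), rng.standard_normal((20, 5))
beta_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
ytr = Xtr @ beta_true + 0.1 * rng.standard_normal(40)
yval = Xval @ beta_true + 0.1 * rng.standard_normal(20)

lam = np.ones(5)
loss, grad = val_loss_and_grad(Xtr, ytr, Xval, yval, lam)

# Finite-difference check of one gradient component.
eps = 1e-6
lam2 = lam.copy()
lam2[2] += eps
loss2, _ = val_loss_and_grad(Xtr, ytr, Xval, yval, lam2)
fd = (loss2 - loss) / eps
```

One linear solve per evaluation yields all coordinates of the gradient at once, which is the efficiency advantage over automatic differentiation mentioned above.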

The main reason for the query model's prominence in complexity theory and quantum computing is the presence of concrete lower bounding techniques: the polynomial and adversary methods. There have been considerable efforts to give lower bounds using these methods, and to compare and relate them with other measures based on the decision tree. We explore the value of these lower bounds on quantum query complexity and their relation with other decision tree based complexity measures for the class of symmetric functions, arguably one of the most natural and basic sets of Boolean functions. We show an explicit construction for the dual of the positive adversary method and also of the square root of private coin certificate game complexity for any total symmetric function. This shows that the two values cannot be distinguished for any symmetric function. Additionally, we show that the recently introduced measure of spectral sensitivity gives the same value as both positive adversary and approximate degree for every total symmetric Boolean function. Further, we look at the quantum query complexity of Gap Majority, a partial symmetric function. It has gained importance recently in regard to understanding the composition of randomized query complexity. We characterize the quantum query complexity of Gap Majority and show a lower bound on noisy randomized query complexity (Ben-David and Blais, FOCS 2020) in terms of quantum query complexity. Finally, we study how large certificate complexity and block sensitivity can be as compared to sensitivity for symmetric functions (even up to constant factors). We show tight separations, i.e., give upper bounds on possible separations and construct functions achieving the same.
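One of the measures compared above, sensitivity, can be computed by brute force on small symmetric functions. The sketch below evaluates it for OR and majority on toy instance sizes of our choosing; it illustrates the definition only, not any of the constructions in the paper.

```python
import itertools

def sensitivity(f, n):
    """Brute-force sensitivity: the maximum, over all inputs x, of the
    number of single-bit flips of x that change the value f(x)."""
    best = 0
    for x in itertools.product((0, 1), repeat=n):
        fx = f(x)
        s = sum(f(x[:i] + (1 - x[i],) + x[i + 1:]) != fx for i in range(n))
        best = max(best, s)
    return best

# Two total symmetric functions: OR on 4 bits and majority on 3 bits.
f_or = lambda x: int(any(x))
f_maj3 = lambda x: int(sum(x) >= 2)
s_or = sensitivity(f_or, 4)    # every flip at the all-zeros input changes OR
s_maj = sensitivity(f_maj3, 3)
```

For symmetric functions the maximizing input depends only on its Hamming weight, which is what makes the tight separations between sensitivity, block sensitivity, and certificate complexity tractable to characterize.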
