
In this paper, we investigate fast algorithms for approximating the Caputo derivative $^C_0D_t^\alpha u(t)$ when $\alpha$ is small. We focus on two fast algorithms, FIR and FIDR, both of which rely on a sum-of-exponentials approximation to reduce the cost of evaluating the history part. FIR is the numerical scheme originally proposed in [16], while FIDR is an alternative scheme we propose in this work; the latter proves superior when $\alpha$ is small. With quantitative estimates, we prove that, for a given error threshold, the computational cost of evaluating the history part of the Caputo derivative decreases as $\alpha$ becomes small. Hence, only minimal cost is required for the fast evaluation in the small-$\alpha$ regime, which matches prevailing protocols in engineering practice. We also present a stability and error analysis of FIDR for solving linear fractional diffusion equations. Finally, we carry out systematic numerical studies of the performance of both the FIR and FIDR schemes, exploring the trade-off between accuracy and efficiency when $\alpha$ is small.
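To make the sum-of-exponentials idea concrete, here is a minimal Python sketch with illustrative (not optimized) quadrature nodes: the kernel $t^{-\alpha}$ is replaced by a sum of decaying exponentials, so the history part can be updated recursively at $O(1)$ cost per mode instead of re-summing the entire past. The function names, node counts, and midpoint quadrature are assumptions for illustration; the actual FIR/FIDR constructions differ in their node selection and error control.

```python
import numpy as np
from math import gamma

# Sum-of-exponentials (SOE) approximation of the kernel t^{-alpha} via the
# integral representation
#   t^{-alpha} = (1/Gamma(alpha)) * int_0^inf s^{alpha-1} e^{-s t} ds,
# discretized by midpoint quadrature on a geometric grid (illustrative nodes,
# not the optimized construction from the paper).
def soe_nodes_weights(alpha, n_nodes=128, s_min=1e-8, s_max=1e8):
    edges = np.geomspace(s_min, s_max, n_nodes + 1)
    s = 0.5 * (edges[1:] + edges[:-1])
    w = s**(alpha - 1.0) * np.diff(edges) / gamma(alpha)
    return s, w

# Fast evaluation of the history part
#   H(t_n) = (1/Gamma(1-alpha)) * int_0^{t_{n-1}} (t_n - tau)^{-alpha} u'(tau) dtau
# via one recursion per mode, so each time step costs O(n_nodes) instead of O(n).
def caputo_history(u, dt, alpha):
    s, w = soe_nodes_weights(alpha)
    decay = np.exp(-s * dt)
    incr = (1.0 - decay) / s      # integral of e^{-s*tau} over one step
    y = np.zeros_like(s)          # y_j ~ int_0^{t_{n-1}} e^{-s_j (t_{n-1}-tau)} u'(tau) dtau
    H = [0.0, 0.0]                # history part vanishes at t_0 and t_1
    for n in range(2, len(u)):
        y = decay * y + (u[n - 1] - u[n - 2]) / dt * incr   # absorb step n-1
        H.append(np.dot(w, decay * y) / gamma(1.0 - alpha))
    return np.array(H)
```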

Related Content


Recent developments in Deep Reinforcement Learning (DRL) have demonstrated the superior performance of neural networks in solving challenging problems with large or even continuous state spaces. One specific approach is to deploy neural networks to approximate value functions by minimising the Mean Squared Bellman Error (MSBE). Despite the great successes of DRL, the development of reliable and efficient numerical algorithms to minimise the MSBE remains of great scientific interest and practical demand. This challenge stems partly from the underlying optimisation problem being highly non-convex, and partly from the use of incomplete gradient information, as in Semi-Gradient algorithms. In this work, we analyse the MSBE from a smooth-optimisation perspective and develop an efficient Approximate Newton algorithm. First, we conduct a critical point analysis of the error function and provide technical insights on optimisation and design choices for neural networks: assuming global minima exist and the objective fulfils certain conditions, suboptimal local minima can be avoided when using over-parametrised neural networks. Based on this analysis, we construct a Gauss-Newton Residual Gradient algorithm in two variations. The first variation applies to discrete state spaces and exact learning; we numerically confirm theoretical properties of this algorithm, such as local quadratic convergence to a global minimum. The second employs sampling and can be used in the continuous setting. We demonstrate the feasibility and generalisation capabilities of the proposed algorithm empirically on continuous control problems and provide a numerical verification of our critical point analysis. We also outline the difficulties of combining Semi-Gradient approaches with Hessian information: to benefit from second-order information, complete derivatives of the MSBE must be considered during training.
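As an illustration of the Gauss-Newton structure described above, the sketch below performs one damped Gauss-Newton step on the MSBE for a *linear* Q-function, where the complete residual derivative (including the bootstrapped next-state term that semi-gradient methods drop) is available in closed form away from argmax ties. All names (`phi_sa`, `phi_next`, `mu`) are illustrative; the paper's algorithm targets neural networks and has two variations not reproduced here.

```python
import numpy as np

# One damped Gauss-Newton step on the mean-squared Bellman error for a linear
# Q-function Q(s, a) = phi(s, a) @ theta, over a batch of transitions.
def gauss_newton_msbe_step(theta, phi_sa, phi_next, rewards, gamma=0.99, mu=1e-3):
    # phi_sa:   (N, d)    features of the taken state-action pairs
    # phi_next: (N, A, d) features of every action at the next state
    q_next = phi_next @ theta                              # (N, A) next-state values
    a_star = np.argmax(q_next, axis=1)                     # greedy next actions
    phi_star = phi_next[np.arange(len(a_star)), a_star]    # (N, d) argmax features
    residual = phi_sa @ theta - (rewards + gamma * q_next.max(axis=1))
    # Complete derivative of the residual: the bootstrapped term is NOT frozen
    # (unlike semi-gradient TD), so the argmax feature enters the Jacobian.
    J = phi_sa - gamma * phi_star                          # (N, d) residual Jacobian
    # Damped (Levenberg-Marquardt style) Gauss-Newton solve.
    step = np.linalg.solve(J.T @ J + mu * np.eye(len(theta)), J.T @ residual)
    return theta - step
```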

In this paper, we focus on the design and analysis of the Analog Fountain Code (AFC) for short-packet communications. We first propose a density evolution (DE) based framework that tracks the evolution of the probability density function of the messages exchanged between the variable and check nodes of AFC in the belief propagation decoder. Using the proposed DE framework, we formulate an optimisation problem to find the optimal AFC code parameters, including the weight-set, that minimise the bit error rate at a given signal-to-noise ratio (SNR). Our results show the superiority of our AFC code design over existing designs in the literature, and thus the validity of the proposed DE framework in the asymptotically long block length regime. We then focus on selecting the precoder to improve the performance of AFC at short block lengths. Simulation results show that lower precode rates achieve better realised rates over a wide SNR range for short information block lengths. We also discuss the complexity of the AFC decoder and propose a threshold-based decoder to reduce it.
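For readers unfamiliar with AFC, the following toy sketch shows the encoding step that the weight-set controls: each coded symbol is a real-valued weighted sum of a few randomly selected, BPSK-mapped information bits. The degree, the weight-set values, and all names here are placeholders, not the optimized parameters produced by the DE framework.

```python
import numpy as np

# Toy Analog Fountain Code encoder: each coded symbol is a weighted sum of
# `degree` randomly chosen BPSK symbols (+/-1), with edge weights drawn from
# a fixed weight-set W. The edge lists are kept for a belief-propagation
# decoder (not shown).
def afc_encode(bpsk_bits, num_coded, weight_set=(0.3, 0.7, 1.0), degree=3, seed=0):
    rng = np.random.default_rng(seed)
    k = len(bpsk_bits)
    symbols, edges = [], []
    for _ in range(num_coded):
        idx = rng.choice(k, size=degree, replace=False)   # variable nodes
        w = rng.choice(weight_set, size=degree)           # edge weights from W
        symbols.append(float(w @ bpsk_bits[idx]))         # analog coded symbol
        edges.append((idx, w))
    return np.array(symbols), edges

# Example: encode 100 random BPSK symbols into 120 analog symbols.
bits = np.random.default_rng(1).choice([-1.0, 1.0], size=100)
coded, graph = afc_encode(bits, num_coded=120)
```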

Motivated by comparing the convergence behavior of Gegenbauer projections and best approximations, we study the optimal rate of convergence of Gegenbauer projections in the maximum norm. We show that the rate of convergence of Gegenbauer projections is the same as that of best approximations when the underlying function is either analytic on and within an ellipse with $\lambda\leq0$, or differentiable with $\lambda\leq1$, where $\lambda$ is the parameter of the Gegenbauer projection. If the underlying function is analytic with $\lambda>0$ or differentiable with $\lambda>1$, then the rate of convergence of Gegenbauer projections is slower than that of best approximations by factors of $n^{\lambda}$ and $n^{\lambda-1}$, respectively. An exceptional case is functions with endpoint singularities, for which Gegenbauer projections and best approximations converge at the same rate for all $\lambda>-1/2$. For functions with interior or endpoint singularities, we provide a theoretical explanation for the error localization phenomenon of Gegenbauer projections and for why their accuracy is better than that of best approximations except in small neighborhoods of the critical points. Our analysis provides fundamentally new insight into the power of Gegenbauer approximations and related spectral methods.
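A small numerical experiment along these lines can be set up as follows (a sketch; the test function, quadrature size, and evaluation grid are arbitrary choices): compute the degree-$n$ Gegenbauer projection by Gauss-Gegenbauer quadrature and measure its maximum-norm error, which for an analytic function and $\lambda>0$ should decay about a factor of $n^{\lambda}$ more slowly than the best approximation.

```python
import numpy as np
from math import factorial
from scipy.special import eval_gegenbauer, roots_gegenbauer, gamma

def gegenbauer_projection_error(f, lam, n, n_quad=400):
    # Gauss-Gegenbauer quadrature for the weight (1-x^2)^(lam-1/2);
    # requires lam > -1/2, and the normalization below assumes lam != 0.
    x, w = roots_gegenbauer(n_quad, lam)
    fx = f(x)
    k = np.arange(n + 1)
    # Orthogonality constants h_k = ||C_k^lam||^2 (DLMF 18.3.1).
    h = np.pi * 2.0**(1 - 2 * lam) * gamma(k + 2 * lam) / (
        np.array([factorial(int(j)) for j in k]) * (k + lam) * gamma(lam)**2)
    c = np.array([np.sum(w * fx * eval_gegenbauer(int(j), lam, x)) for j in k]) / h
    xt = np.linspace(-1.0, 1.0, 2001)
    proj = sum(cj * eval_gegenbauer(int(j), lam, xt) for j, cj in zip(k, c))
    return np.max(np.abs(f(xt) - proj))

# e.g. observe the decay for an analytic function at lam = 2:
# [gegenbauer_projection_error(np.exp, 2.0, n) for n in (10, 20, 30)]
```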

In this paper, we derive improved a priori error estimates for families of hybridizable interior penalty discontinuous Galerkin (H-IP) methods that use a variable penalty for second-order elliptic problems. The strategy is to use a penalization function of the form $\mathcal{O}(1/h^{1+\delta})$, where $h$ denotes the mesh size and $\delta$ is a user-chosen parameter. We then quantify its direct impact on the convergence analysis, namely, the (strong) consistency, discrete coercivity, and boundedness (with $h^{\delta}$-dependency), and we derive updated error estimates for both the discrete energy- and $L^{2}$-norms. The originality of the error analysis lies specifically in the use of conforming interpolants of the exact solution. All theoretical results are supported by numerical evidence.
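For orientation, a representative local bilinear form with the variable penalty might be written as below. This is a sketch in assumed notation (the precise H-IP formulation, including the treatment of the trace unknown $\hat{u}_h$, is the paper's), intended only to show where the $\mathcal{O}(1/h^{1+\delta})$ scaling enters: $\delta>0$ strengthens the face penalty, which aids discrete coercivity at the price of an $h^{\delta}$ factor in boundedness.

```latex
% Sketch in assumed notation: a local interior-penalty form coupling the
% element unknown u to a face (trace) unknown \hat{u}, with variable penalty.
a_h^K(u,\hat{u};\,v,\hat{v})
  = (\nabla u,\nabla v)_K
  - \langle \nabla u\cdot\boldsymbol{n},\, v-\hat{v}\rangle_{\partial K}
  - \langle u-\hat{u},\, \nabla v\cdot\boldsymbol{n}\rangle_{\partial K}
  + \frac{\sigma_{0}}{h^{1+\delta}}\,\langle u-\hat{u},\, v-\hat{v}\rangle_{\partial K}.
```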

Identification of a linear time-invariant dynamical system from partial observations is a fundamental problem in control theory. Particularly challenging are systems exhibiting long-term memory. A natural question is how to learn such systems with non-asymptotic statistical rates depending on the inherent dimensionality (order) $d$ of the system, rather than on the possibly much larger memory length. We propose an algorithm that, given a single trajectory of length $T$ with Gaussian observation noise, learns the system with a near-optimal rate of $\widetilde O\left(\sqrt{\frac{d}{T}}\right)$ in $\mathcal{H}_2$ error, with only logarithmic, rather than polynomial, dependence on memory length. We also give bounds under process noise and improved bounds for learning a realization of the system. Our algorithm is based on multi-scale low-rank approximation: SVD applied to Hankel matrices of geometrically increasing sizes. Our analysis relies on careful application of concentration bounds in the Fourier domain -- we give sharper concentration bounds for the sample covariance of correlated inputs and for $\mathcal{H}_\infty$ norm estimation, which may be of independent interest.
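The single-scale core of such a procedure is the classical Ho-Kalman construction, sketched below for a SISO system: stack estimated Markov parameters into a Hankel matrix, truncate its SVD at rank $d$, and read off a realization from shift-invariance. The multi-scale aggregation over geometrically increasing Hankel sizes, and the concentration analysis, are not reproduced here; all names are ours.

```python
import numpy as np

# Ho-Kalman-style realization from estimated Markov parameters h_1,...,h_{2m}
# of a SISO system: build the Hankel matrix, take a rank-d SVD, and recover
# (A, B, C) with C A^{k-1} B ~ h_k.
def hankel_realization(markov, d):
    m = len(markov) // 2
    H = np.array([[markov[i + j] for j in range(m)] for i in range(m)])
    U, s, Vt = np.linalg.svd(H)
    sqrt_S = np.diag(np.sqrt(s[:d]))
    O = U[:, :d] @ sqrt_S                 # extended observability matrix
    R = sqrt_S @ Vt[:d, :]                # extended controllability matrix
    C = O[0:1, :]                         # output map: first block row of O
    B = R[:, 0:1]                         # input map: first block column of R
    A = np.linalg.pinv(O[:-1, :]) @ O[1:, :]   # shift-invariance gives A
    return A, B, C
```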

Block coordinate descent (BCD), also known as nonlinear Gauss-Seidel, is a simple iterative algorithm for nonconvex optimization that sequentially minimizes the objective function in each block coordinate while the other coordinates are held fixed. We propose a version of BCD that, for block multi-convex and smooth objective functions under constraints, is guaranteed to converge to stationary points with a worst-case rate of convergence of $O((\log n)^{2}/n)$ for $n$ iterations, and a bound of $O(\epsilon^{-1}(\log \epsilon^{-1})^{2})$ on the number of iterations needed to achieve an $\epsilon$-approximate stationary point. Furthermore, we show that these results continue to hold even when the convex sub-problems are inexactly solved, provided the optimality gaps are uniformly summable against initialization. A key idea is to restrict the parameter search to a diminishing radius around the current iterate, which promotes stability of the iterates. As an application, we provide an alternating least squares algorithm with diminishing radius for nonnegative CP tensor decomposition that converges to the stationary points of the reconstruction error with the same robust worst-case convergence rate and complexity bounds. We also experimentally validate our results on both synthetic and real-world data and demonstrate that the auxiliary search-radius restriction can in fact improve the rate of convergence.
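A minimal sketch of the diminishing-radius device follows; the solver choice, the radius schedule $r_n = c/(n+1)$, and all names are assumptions for illustration. Each block update is confined to a ball around the current block iterate whose radius shrinks over sweeps, which stabilizes the iterates without changing the limiting stationarity conditions.

```python
import numpy as np
from scipy.optimize import minimize

# Block coordinate descent where each convex block subproblem is solved
# within an L2 ball of diminishing radius r_n around the current block.
def bcd_diminishing_radius(f, blocks_init, n_sweeps=100, c=1.0):
    blocks = [b.astype(float).copy() for b in blocks_init]
    for n in range(n_sweeps):
        r = c / (n + 1)                           # diminishing search radius
        for i in range(len(blocks)):
            center = blocks[i].copy()
            def sub(x):                           # objective in block i alone
                return f(blocks[:i] + [x] + blocks[i + 1:])
            ball = {'type': 'ineq',               # r - ||x - center|| >= 0
                    'fun': lambda x, c0=center, r0=r: r0 - np.linalg.norm(x - c0)}
            blocks[i] = minimize(sub, center, constraints=ball).x
    return blocks

# Example usage: rank-1 matrix factorization f([u, v]) = ||X - u v^T||_F^2.
X = np.outer([1.0, 2.0], [3.0, 4.0, 5.0])
u, v = bcd_diminishing_radius(
    lambda b: np.sum((X - np.outer(b[0], b[1]))**2),
    [np.ones(2), np.ones(3)], n_sweeps=50)
```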

Risk-limiting audits (RLAs), an ingredient in evidence-based elections, are increasingly common. They are a rigorous statistical means of ensuring that electoral results are correct, usually without having to perform an expensive full recount -- at the cost of some controlled probability of error. A recently developed approach for conducting RLAs, SHANGRLA, provides a flexible framework that can encompass a wide variety of social choice functions and audit strategies. Its flexibility comes from reducing sufficient conditions for outcomes to be correct to canonical `assertions' that have a simple mathematical form. Assertions have been developed for auditing various social choice functions including plurality, multi-winner plurality, super-majority, Hamiltonian methods, and instant runoff voting. However, there is no systematic approach to building assertions. Here, we show that assertions with linear dependence on transformations of the votes can easily be transformed to canonical form for SHANGRLA. We illustrate the approach by constructing assertions for party-list elections such as Hamiltonian free list elections and elections using the D'Hondt method, expanding the set of social choice functions to which SHANGRLA applies directly.
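To illustrate the canonical form, here is a small sketch of the standard affine transformation (helper names are ours, and the example is the textbook plurality case rather than the party-list constructions of the paper): given a linear assertion "the ballot-average of $g$ exceeds $c$" with $g$ bounded below, one obtains a non-negative assorter whose mean exceeds $1/2$ exactly when the assertion holds, which is the form SHANGRLA audits.

```python
# Canonicalization sketch: given g with g(ballot) >= lower for every ballot
# and the assertion  mean(g) > c  (with c > lower), the affine map below
# yields a non-negative assorter A with  mean(A) > 1/2  <=>  mean(g) > c.
def make_assorter(g, c, lower):
    scale = 2.0 * (c - lower)
    def assorter(ballot):
        return (g(ballot) - lower) / scale
    return assorter

# Example: plurality, candidate w vs l. g = +1 for a vote for w, -1 for a
# vote for l, 0 otherwise; "w beats l" <=> mean(g) > 0, with lower = -1.
a_wl = make_assorter(lambda b: {'w': 1, 'l': -1}.get(b, 0), c=0.0, lower=-1.0)
assert a_wl('w') == 1.0 and a_wl('l') == 0.0 and a_wl('other') == 0.5
```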

Given a point set $P$ in the plane, we seek a subset $Q\subseteq P$ whose convex hull gives a smaller and thus simpler representation of the convex hull of $P$. Specifically, let $cost(Q,P)$ denote the Hausdorff distance between the convex hulls $\mathcal{CH}(Q)$ and $\mathcal{CH}(P)$. Then, given a value $\varepsilon>0$, we seek the smallest subset $Q\subseteq P$ such that $cost(Q,P)\leq \varepsilon$. We also consider the dual version: given an integer $k$, we seek the subset $Q\subseteq P$ with $|Q|\leq k$ which minimizes $cost(Q,P)$. For these problems, when $P$ is in convex position, we respectively give an $O(n\log^2 n)$ time algorithm and an $O(n\log^3 n)$ time algorithm, where the latter running time holds with high probability. When there is no restriction on $P$, we show the problem can be reduced to APSP in an unweighted directed graph, yielding an $O(n^{2.5302})$ time algorithm when minimizing $k$ and an $O(\min\{n^{2.5302}, kn^{2.376}\})$ time algorithm when minimizing $\varepsilon$, using prior results for APSP. Finally, we show that our near-linear algorithms for convex position give 2-approximations for the general case.
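The cost function itself is easy to evaluate by brute force for testing purposes, as in the following sketch (quadratic time, function names ours): because $Q\subseteq P$ implies $\mathcal{CH}(Q)\subseteq\mathcal{CH}(P)$, the Hausdorff distance reduces to the farthest distance from a vertex of $\mathcal{CH}(P)$ to $\mathcal{CH}(Q)$.

```python
import numpy as np
from scipy.spatial import ConvexHull

def _seg_dist(p, a, b):
    # distance from point p to segment [a, b]
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def _cross(o, a, b):
    # z-component of (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def cost(Q, P):
    # Hausdorff distance between CH(Q) and CH(P) for Q a subset of P;
    # requires >= 3 points in general position for ConvexHull.
    hq = Q[ConvexHull(Q).vertices]     # CCW-ordered vertices of CH(Q)
    hp = P[ConvexHull(P).vertices]
    m = len(hq)
    def d_to_hq(p):
        if all(_cross(hq[i], hq[(i + 1) % m], p) >= 0 for i in range(m)):
            return 0.0                 # p lies inside CH(Q)
        return min(_seg_dist(p, hq[i], hq[(i + 1) % m]) for i in range(m))
    return max(d_to_hq(p) for p in hp)
```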

Nonlinear space-fractional problems often admit multiple stationary solutions, which can be much more complicated than those of the corresponding integer-order problems. In this paper, we systematically compute the solution landscapes of nonlinear constant- and variable-order space-fractional problems. A fast approximation algorithm is developed to handle the variable-order spectral fractional Laplacian by approximating the variable-indexing Fourier modes, and is then combined with saddle dynamics to construct the solution landscape of a variable-order space-fractional phase field model. Numerical experiments are performed to substantiate the accuracy and efficiency of the fast approximation algorithm and to elucidate essential features of the stationary solutions of the space-fractional phase field model. Furthermore, we demonstrate that the solution landscapes of spectral fractional Laplacian problems can be reconfigured by varying the diffusion coefficients in the corresponding integer-order problems.
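For the constant-order case, the spectral fractional Laplacian is diagonal in Fourier space, as the short sketch below shows (periodic 1-D grid; names and normalization are illustrative). For variable order $\alpha(x)$, the multiplier $|k|^{\alpha(x)}$ no longer separates from the spatial variable, which is precisely the difficulty the fast approximation of the variable-indexing Fourier modes addresses; that part is not reproduced here.

```python
import numpy as np

# Constant-order building block: the spectral fractional Laplacian
# (-Delta)^(alpha/2) on a periodic 1-D grid, applied via FFT with the
# Fourier multiplier |k|^alpha.
def spectral_fractional_laplacian(u, alpha, L=2 * np.pi):
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    return np.fft.ifft(np.abs(k)**alpha * np.fft.fft(u)).real
```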

We develop an approach to risk minimization and stochastic optimization that provides a convex surrogate for variance, allowing near-optimal and computationally efficient trading between approximation and estimation error. Our approach builds on techniques for distributionally robust optimization and Owen's empirical likelihood, and we provide a number of finite-sample and asymptotic results characterizing the theoretical performance of the estimator. In particular, we show that our procedure comes with certificates of optimality, achieving (in some scenarios) faster rates of convergence than empirical risk minimization by virtue of automatically balancing bias and variance. We give corroborating empirical evidence showing that, in practice, the estimator indeed trades between variance and absolute performance on a training sample, improving out-of-sample (test) performance over standard empirical risk minimization on a number of classification problems.
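The flavor of the convex surrogate can be seen in the well-known expansion of the distributionally robust objective, sketched below (the radius parameter `rho`, the divergence choice, and the exact regularity conditions are assumptions here): the worst case over a small chi-squared ball around the empirical distribution behaves like the empirical mean plus a standard-deviation penalty scaled by $\sqrt{2\rho/n}$, and it is convex in the model whenever the loss is.

```python
import numpy as np

# Sketch (constants illustrative): for losses L_1, ..., L_n, the robust
# objective  sup { E_P[L] : chi^2(P, P_n) <= rho/n }  expands, under suitable
# conditions, as  mean(L) + sqrt(2 * rho * var(L) / n) + o(n^{-1/2}),
# a convex surrogate for the nonconvex "mean + variance" criterion.
def robust_objective(losses, rho):
    n = len(losses)
    return losses.mean() + np.sqrt(2.0 * rho * losses.var(ddof=1) / n)
```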
