
The present work is devoted to the asymptotic expansion of the eigenvalues of the Toeplitz matrix $T_{n}(a)$ whose generating function $a$ is complex-valued and has a power singularity at one point. As a consequence, $T_{n}(a)$ is non-Hermitian, and the eigenvalue computation is a non-trivial task in the non-Hermitian setting for large sizes. We follow the work of Bogoya, B\"ottcher, Grudsky, and Maximenko and deduce a complete asymptotic expansion for the eigenvalues. After that, we apply matrix-less algorithms, in the spirit of the work by Ekstr\"om, Furci, Garoni, Serra-Capizzano et al., for computing those eigenvalues. Since the inner and extreme eigenvalues have different asymptotic behaviors, we treat them independently and combine the results to produce a high-precision, global, matrix-less numerical algorithm. The numerical results are very precise, and the computational cost of the proposed algorithms is independent of the size of the considered matrices for each eigenvalue, which implies a linear cost when the whole spectrum is computed. From the viewpoint of real-world applications, we emphasize that the matrix class under consideration includes the matrices stemming from the numerical approximation of fractional diffusion equations. The concluding section presents a concise discussion of the matter and a few open problems.
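To fix ideas, the sketch below builds a Toeplitz matrix from the Fourier coefficients of a symbol and compares its eigenvalues with the known closed form for the tridiagonal case. The symbol, sizes, and helper names are illustrative assumptions, not the paper's singular symbol or its matrix-less algorithm.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matrix(n, coeffs):
    """Assemble T_n(a) whose (j, k) entry is the Fourier coefficient a_{j-k}."""
    col = [coeffs.get(i, 0.0) for i in range(n)]    # first column: a_0, a_1, ...
    row = [coeffs.get(-j, 0.0) for j in range(n)]   # first row:    a_0, a_{-1}, ...
    return toeplitz(col, row)

# Illustrative non-Hermitian tridiagonal symbol a(t) = t^{-1} + 2 + 3t (NOT the
# paper's singular symbol); its eigenvalues are known in closed form:
#   lambda_k = 2 + 2*sqrt(3)*cos(k*pi/(n+1)),  k = 1, ..., n.
n = 12
T = toeplitz_matrix(n, {-1: 1.0, 0: 2.0, 1: 3.0})
eigs = np.sort(np.linalg.eigvals(T).real)
exact = np.sort(2 + 2 * np.sqrt(3) * np.cos(np.arange(1, n + 1) * np.pi / (n + 1)))
print(np.max(np.abs(eigs - exact)))  # tiny at this size
```

At this small size the direct computation agrees with the closed form; for large $n$ the non-Hermitian eigenvalue problem becomes severely ill-conditioned, which is exactly why asymptotic expansions and matrix-less methods are attractive.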

Related content

Gaussian boson sampling, a computational model widely believed to admit quantum supremacy, has already been experimentally demonstrated to surpass the classical simulation capabilities of even the most powerful supercomputers today. However, whether the current approach, limited by photon loss and noise in such experiments, prescribes a scalable path to quantum advantage is an open question. For example, random circuit sampling with constant noise per gate was recently shown not to be a scalable approach to achieving quantum supremacy, although simulating intermediate-scale systems is still difficult. To understand the effect of photon loss on the scalability of Gaussian boson sampling, we use a tensor network algorithm with $U(1)$ symmetry to examine the asymptotic operator entanglement entropy scaling, which relates to the simulation complexity. We develop a custom-built algorithm that significantly reduces the computational time with state-of-the-art hardware accelerators, enabling simulations of much larger systems. With this capability, we observe, for Gaussian boson sampling, the crucial $N_\text{out}\propto\sqrt{N}$ scaling of the number of surviving photons as a function of the number of input photons, which marks the boundary between efficient and inefficient classical simulation. We further show theoretically that this behavior should be general for other input states.

Combining ideas coming from Stone duality and Reynolds parametricity, we formulate in a clean and principled way a notion of profinite lambda-term which, we show, generalizes at every type the traditional notion of profinite word coming from automata theory. We start by defining the Stone space of profinite lambda-terms as a projective limit of finite sets of usual lambda-terms, considered modulo a notion of equivalence based on the finite standard model. One main contribution of the paper is to establish that, somewhat surprisingly, the resulting notion of profinite lambda-term coming from Stone duality lives in perfect harmony with the principles of Reynolds parametricity. In addition, we show that the notion of profinite lambda-term is compositional by constructing a cartesian closed category of profinite lambda-terms, and we establish that the embedding from lambda-terms modulo beta-eta-conversion to profinite lambda-terms is faithful using Statman's finite completeness theorem. Finally, we prove a parametricity theorem for Church encodings of word and ranked tree languages, which states that every parametric family of elements in the finite standard model is the interpretation of a profinite lambda-term. This result shows that our notion of profinite lambda-term conservatively extends the existing notion of profinite word and provides a natural framework for defining and studying profinite trees.

This article proposes a generalisation of the delete-$d$ jackknife to solve hyperparameter selection problems for time series. I call it the artificial delete-$d$ jackknife to stress that this approach substitutes the classic removal step with a fictitious deletion, wherein observed datapoints are replaced with artificial missing values. This procedure keeps the data order intact and makes the method directly compatible with time series. This manuscript justifies the approach asymptotically and shows its finite-sample advantages through simulation studies. In addition, this article demonstrates its real-world advantages by regulating high-dimensional forecasting models for foreign exchange rates.
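The fictitious-deletion step can be sketched as follows: instead of removing $d$ observations, they are masked as missing values, so the series keeps its length and ordering. The masking pattern (uniform without replacement) and the helper names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def artificial_delete_d(y, d, n_rep, rng=None):
    """Generate jackknife replications by masking d entries with NaN.

    Unlike the classic delete-d jackknife, nothing is removed: each
    replication keeps the length and time ordering of the series, and the
    masked entries act as artificial missing values.
    """
    rng = np.random.default_rng(rng)
    y = np.asarray(y, dtype=float)
    reps = []
    for _ in range(n_rep):
        z = y.copy()
        z[rng.choice(len(y), size=d, replace=False)] = np.nan
        reps.append(z)
    return reps

y = np.arange(20.0)   # toy series
reps = artificial_delete_d(y, d=3, n_rep=5, rng=0)
print(len(reps[0]), int(np.isnan(reps[0]).sum()))   # 20 3: order kept, d masked
```

A model-selection loop would then refit the forecasting model on each replication and score it on the masked entries.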

Many existing reinforcement learning (RL) methods employ stochastic gradient iteration on the back end, whose stability hinges upon the hypothesis that the data-generating process mixes exponentially fast, with a rate parameter that appears in the step-size selection. Unfortunately, this assumption is violated for large state spaces or settings with sparse rewards, and the mixing time is typically unknown, making the prescribed step-size rule inoperable. In this work, we propose an RL methodology attuned to the mixing time by employing a multi-level Monte Carlo estimator for the critic, the actor, and the average reward embedded within an actor-critic (AC) algorithm. This method, which we call \textbf{M}ulti-level \textbf{A}ctor-\textbf{C}ritic (MAC), is developed especially for infinite-horizon average-reward settings and neither relies on oracle knowledge of the mixing time in its parameter selection nor assumes its exponential decay; it is therefore readily applicable to applications with slower mixing times. Nonetheless, it achieves a convergence rate comparable to state-of-the-art AC algorithms. We experimentally show that these relaxed technical conditions for stability translate into superior performance in practice for RL problems with sparse rewards.
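The multi-level idea can be sketched generically as a randomized telescoping estimator of a long-run mean that needs no knowledge of the mixing rate. The construction below follows a standard randomized-truncation recipe and is only a stand-in for the paper's estimator; the level distribution, the cap on levels, and all names are assumptions.

```python
import numpy as np

def mlmc_mean(sample_path, max_level=6, rng=None):
    """Randomized multi-level estimate of a long-run mean.

    Draw a level J with P(J = j) proportional to 2^{-j}, then return
    S_1 + (S_{2^{J+1}} - S_{2^J}) / P(J), where S_m is the mean of the
    first m samples.  The telescoping sum makes the estimate unbiased for
    the mean of 2^max_level samples, with expected sample cost O(max_level)
    and no mixing-rate knowledge required.
    """
    rng = np.random.default_rng(rng)
    w = 2.0 ** -np.arange(max_level)
    p = w / w.sum()
    j = int(rng.choice(max_level, p=p))
    x = sample_path(2 ** (j + 1))
    s = lambda m: x[:m].mean()
    return s(1) + (s(2 ** (j + 1)) - s(2 ** j)) / p[j]

stream = np.random.default_rng(0)
draw = lambda m: stream.normal(1.0, 1.0, size=m)   # stand-in reward stream
est = np.mean([mlmc_mean(draw, rng=k) for k in range(4000)])
print(est)   # close to the true mean 1.0
```

In MAC, estimators of this flavor replace the fixed-horizon averages inside the actor, critic, and average-reward updates.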

Random butterfly matrices were introduced by Parker in 1995 to remove the need for pivoting when using Gaussian elimination. The growing applications of butterfly matrices have often eclipsed the mathematical understanding of how or why they are able to accomplish these tasks. To begin closing this gap through theoretical and numerical approaches, we explore the impact on the growth factor of preconditioning a linear system by butterfly matrices, and compare the results to other common methods from randomized numerical linear algebra. In these experiments, we show that preconditioning with butterfly matrices dampens large growth factors more strongly than other common preconditioners, while only mildly increasing the growth factor of systems where it is already minimal. Moreover, we determine the full distribution of the growth factors for a subclass of random butterfly matrices. Previous results by Trefethen and Schreiber on the distribution of random growth factors were limited to empirical estimates of the first moment for Ginibre matrices.
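The two ingredients can be sketched concretely. The recursion below is one common construction of a random butterfly matrix (uniform random angles; the literature contains several variants, so this specific recursion is an assumption), paired with the growth factor of Gaussian elimination without pivoting.

```python
import numpy as np

def random_butterfly(k, rng=None):
    """Order-2^k random butterfly via the recursion
    B = [[c*B1, s*B2], [-s*B1, c*B2]],  c = cos(theta), s = sin(theta),
    with theta uniform; B1, B2 are independent order-2^(k-1) butterflies.
    The construction is orthogonal by design."""
    rng = np.random.default_rng(rng)
    if k == 0:
        return np.ones((1, 1))
    b1 = random_butterfly(k - 1, rng)
    b2 = random_butterfly(k - 1, rng)
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    return np.block([[c * b1, s * b2], [-s * b1, c * b2]])

def growth_factor(a):
    """Growth factor of LU without pivoting: largest entry magnitude seen in
    any Schur complement, divided by the largest entry magnitude of A."""
    a = np.array(a, dtype=float)
    g = m = np.abs(a).max()
    for j in range(a.shape[0] - 1):
        a[j + 1:, j + 1:] -= np.outer(a[j + 1:, j] / a[j, j], a[j, j + 1:])
        a[j + 1:, j] = 0.0
        g = max(g, np.abs(a).max())
    return g / m

B = random_butterfly(4, rng=0)
print(np.allclose(B.T @ B, np.eye(16)))   # True: butterflies are orthogonal
```

A preconditioning experiment in this spirit would compare `growth_factor(A)` with the growth factor after transforming the system by random butterflies.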

It is well known that most of the existing theoretical results in statistics are based on the assumption that the sample is generated with replacement from an infinite population. However, in practice, available samples are almost always collected without replacement. If the population is a finite set of real numbers, whether we can still safely use the results from samples drawn without replacement becomes an important problem. In this paper, we focus on the eigenvalues of high-dimensional sample covariance matrices generated without replacement from finite populations. Specifically, we derive the Tracy-Widom laws for their largest eigenvalues and apply these results to parallel analysis. We provide new insight into the permutation methods proposed by Buja and Eyuboglu in [Multivar Behav Res. 27(4) (1992) 509--540]. Simulation and real data studies are conducted to demonstrate our results.
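The permutation method revisited here can be sketched as follows: each column of the data matrix is permuted independently, which destroys cross-correlations while preserving the marginals, and an observed eigenvalue is retained only if it exceeds the corresponding permutation quantile. The function names, quantile level, and toy data are illustrative assumptions.

```python
import numpy as np

def parallel_analysis(x, n_perm=200, q=0.95, rng=None):
    """Permutation parallel analysis in the spirit of Buja and Eyuboglu:
    retain component k when the k-th sample-covariance eigenvalue exceeds
    the q-quantile of the k-th eigenvalue from column-wise permuted data."""
    rng = np.random.default_rng(rng)
    n, p = x.shape
    obs = np.sort(np.linalg.eigvalsh(np.cov(x, rowvar=False)))[::-1]
    null = np.empty((n_perm, p))
    for b in range(n_perm):
        xp = np.column_stack([rng.permutation(x[:, j]) for j in range(p)])
        null[b] = np.sort(np.linalg.eigvalsh(np.cov(xp, rowvar=False)))[::-1]
    return int(np.sum(obs > np.quantile(null, q, axis=0)))

rng = np.random.default_rng(0)
factor = rng.normal(size=(300, 1))                        # one latent factor
x = factor @ np.full((1, 8), 1.5) + rng.normal(size=(300, 8))
print(parallel_analysis(x, rng=1))   # retains the single strong factor
```

The Tracy-Widom results of the paper describe the fluctuations of the largest null eigenvalues that this permutation scheme implicitly samples.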

We introduce a new class of spatially stochastic physics and data informed deep latent models for parametric partial differential equations (PDEs) which operate through scalable variational neural processes. We achieve this by assigning probability measures to the spatial domain, which allows us to treat collocation grids probabilistically as random variables to be marginalised out. Adopting this spatial statistics view, we solve forward and inverse problems for parametric PDEs in a way that leads to the construction of Gaussian process models of solution fields. The implementation of these random grids poses a unique set of challenges for inverse physics informed deep learning frameworks and we propose a new architecture called Grid Invariant Convolutional Networks (GICNets) to overcome these challenges. We further show how to incorporate noisy data in a principled manner into our physics informed model to improve predictions for problems where data may be available but whose measurement location does not coincide with any fixed mesh or grid. The proposed method is tested on a nonlinear Poisson problem, Burgers equation, and Navier-Stokes equations, and we provide extensive numerical comparisons. We demonstrate significant computational advantages over current physics informed neural learning methods for parametric PDEs while improving the predictive capabilities and flexibility of these models.
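The idea of treating the collocation grid as a random variable can be illustrated in one dimension: the PDE residual loss becomes an expectation over grids drawn from a probability measure on the domain, estimated by plain Monte Carlo. The Poisson test function and all names are illustrative assumptions; this is not the GICNet architecture.

```python
import numpy as np

def residual_loss(u_xx, f, n_pts, rng=None):
    """Monte Carlo PDE residual over a random collocation grid: the grid is a
    draw from a probability measure on the domain (uniform on [0, 1] here),
    so the loss is an expectation that marginalises out the grid."""
    rng = np.random.default_rng(rng)
    x = rng.uniform(0.0, 1.0, size=n_pts)   # random grid ~ spatial measure
    r = u_xx(x) - f(x)                      # residual of u'' = f at the grid
    return float(np.mean(r ** 2))

u   = lambda x: np.sin(np.pi * x)               # trial solution of u'' = f
uxx = lambda x: -np.pi ** 2 * np.sin(np.pi * x)  # its second derivative
f   = lambda x: -np.pi ** 2 * np.sin(np.pi * x)  # right-hand side
print(residual_loss(uxx, f, 128, rng=0))   # 0.0: the exact solution
```

In a learning setting the trial solution would be a network whose parameters are trained to drive this expected residual to zero over fresh random grids.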

Given a black box (oracle) for the evaluation of a univariate polynomial $p(x)$ of degree $d$, we seek its zeros, that is, the roots of the equation $p(x)=0$. At FOCS 2016, Louis and Vempala approximated a largest zero of such a real-rooted polynomial within $1/2^b$ at the cost of evaluating Newton's ratio $p(x)/p'(x)$ at $O(b\log d)$ points $x$. They readily extended this root-finder to a record-fast approximation of a largest eigenvalue of a symmetric matrix under the Boolean complexity model. We apply a distinct approach and different techniques to obtain more general results at the same computational cost.
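To fix ideas, plain Newton iteration already approximates the largest zero of a real-rooted polynomial using only oracle evaluations of the Newton ratio: started above all roots, the iterates decrease monotonically to the largest root. The Louis-Vempala algorithm is more refined (achieving the stated $O(b\log d)$ oracle calls), so the sketch below, with an illustrative cubic, is only the basic idea.

```python
import numpy as np

def largest_root(newton_ratio, x0, b):
    """Approximate the largest zero of a real-rooted polynomial given only an
    oracle for the Newton ratio p(x)/p'(x).  Started above all roots, the
    Newton iterates decrease monotonically to the largest root; stop once
    the step falls below 2^-b."""
    x = x0
    while True:
        step = newton_ratio(x)
        x -= step
        if abs(step) < 2.0 ** -b:
            return x

# Example: p(x) = (x - 1)(x - 2)(x - 5) = x^3 - 8x^2 + 17x - 10.
p = np.poly1d([1.0, -8.0, 17.0, -10.0])
dp = p.deriv()
root = largest_root(lambda x: p(x) / dp(x), x0=100.0, b=40)
print(root)   # approximately 5.0
```

The starting point only needs to dominate all roots, e.g. any classical root bound computed from the coefficients.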

Classic algorithms and machine learning systems like neural networks are both abundant in everyday life. While classic computer science algorithms are suitable for precise execution of exactly defined tasks such as finding the shortest path in a large graph, neural networks allow learning from data to predict the most likely answer in more complex tasks such as image classification, which cannot be reduced to an exact algorithm. To get the best of both worlds, this thesis explores combining both concepts leading to more robust, better performing, more interpretable, more computationally efficient, and more data efficient architectures. The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm. When integrating an algorithm into a neural architecture, it is important that the algorithm is differentiable such that the architecture can be trained end-to-end and gradients can be propagated back through the algorithm in a meaningful way. To make algorithms differentiable, this thesis proposes a general method for continuously relaxing algorithms by perturbing variables and approximating the expectation value in closed form, i.e., without sampling. In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable renderers, and differentiable logic gate networks. Finally, this thesis presents alternative training strategies for learning with algorithms.
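The relaxation recipe, perturbing a discrete decision and taking the expectation in closed form, can be made concrete on the smallest building block of a sorting network, the compare-and-swap. The logistic noise model and the temperature value are illustrative choices, not the thesis's exact construction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_cas(a, b, tau=0.1):
    """Differentiable compare-and-swap.  Perturb the comparison with logistic
    noise eps ~ Logistic(0, tau); the expectation of the hard decision
    1{b + eps > a} has the closed form sigmoid((b - a)/tau), so no sampling
    is needed and gradients flow through the comparison."""
    s = sigmoid((b - a) / tau)
    return s * a + (1 - s) * b, (1 - s) * a + s * b   # (soft min, soft max)

lo, hi = soft_cas(3.0, 1.0)
print(round(lo, 6), round(hi, 6))   # 1.0 3.0: nearly exact swap at small tau
```

As `tau` shrinks the relaxation recovers the exact min/max, while for `tau > 0` the outputs remain smooth in both inputs, which is what end-to-end training requires.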

In 1954, Alston S. Householder published Principles of Numerical Analysis, one of the first modern treatments of matrix decomposition, favoring a (block) LU decomposition: the factorization of a matrix into the product of lower and upper triangular matrices. Matrix decomposition has since become a core technology in machine learning, largely due to the development of the backpropagation algorithm for fitting neural networks. The sole aim of this survey is to give a self-contained introduction to concepts and mathematical tools in numerical linear algebra and matrix analysis in order to seamlessly introduce matrix decomposition techniques and their applications in subsequent sections. We clearly cannot cover all the useful and interesting results concerning matrix decomposition within the available scope; we omit, e.g., a separate analysis of Euclidean space, Hermitian space, Hilbert space, and the complex domain. We refer the reader to the linear algebra literature for a more detailed introduction to these related fields.
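A one-step block LU factorization of the kind Householder favored follows directly from the Schur complement identity; recursing on $A_{11}$ and $S$ yields fully triangular factors. The code is a minimal sketch under the assumption of a well-conditioned leading block; production code would solve triangular systems rather than form an inverse.

```python
import numpy as np

def block_lu(a, k):
    """One step of block LU: partition A at index k and factor
        A = [[I, 0], [L21, I]] @ [[A11, A12], [0, S]],
    with Schur complement S = A22 - A21 A11^{-1} A12.  Recursing on A11
    and S yields the full triangular factors."""
    a11, a12 = a[:k, :k], a[:k, k:]
    a21, a22 = a[k:, :k], a[k:, k:]
    l21 = a21 @ np.linalg.inv(a11)   # in practice: solve, do not invert
    s = a22 - l21 @ a12
    n = a.shape[0]
    L = np.eye(n)
    L[k:, :k] = l21
    U = np.zeros((n, n))
    U[:k, :k], U[:k, k:], U[k:, k:] = a11, a12, s
    return L, U

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 6)) + 6.0 * np.eye(6)   # keeps A11 well conditioned
L, U = block_lu(a, 3)
print(np.allclose(L @ U, a))   # True
```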
